Test Report: Hyperkit_macOS 17777

ae144fcddc3654c644548c9cf831271f2087ad79:2023-12-12:32259

Test fail (14/323)

TestMultiNode/serial/FreshStart2Nodes (15.09s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:86: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-449000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=hyperkit 
multinode_test.go:86: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p multinode-449000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=hyperkit : exit status 90 (14.925390987s)

-- stdout --
	* [multinode-449000] minikube v1.32.0 on Darwin 14.2
	  - MINIKUBE_LOCATION=17777
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17777-1259/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17777-1259/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting control plane node multinode-449000 in cluster multinode-449000
	* Creating hyperkit VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	
	

-- /stdout --
** stderr ** 
	I1212 15:10:21.470862    3520 out.go:296] Setting OutFile to fd 1 ...
	I1212 15:10:21.471091    3520 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 15:10:21.471096    3520 out.go:309] Setting ErrFile to fd 2...
	I1212 15:10:21.471100    3520 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 15:10:21.471271    3520 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17777-1259/.minikube/bin
	I1212 15:10:21.472701    3520 out.go:303] Setting JSON to false
	I1212 15:10:21.494932    3520 start.go:128] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":2392,"bootTime":1702420229,"procs":436,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.2","kernelVersion":"23.2.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W1212 15:10:21.495046    3520 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1212 15:10:21.517392    3520 out.go:177] * [multinode-449000] minikube v1.32.0 on Darwin 14.2
	I1212 15:10:21.560123    3520 out.go:177]   - MINIKUBE_LOCATION=17777
	I1212 15:10:21.560302    3520 notify.go:220] Checking for updates...
	I1212 15:10:21.602849    3520 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17777-1259/kubeconfig
	I1212 15:10:21.624063    3520 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1212 15:10:21.644956    3520 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 15:10:21.665952    3520 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17777-1259/.minikube
	I1212 15:10:21.688947    3520 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 15:10:21.710566    3520 driver.go:392] Setting default libvirt URI to qemu:///system
	I1212 15:10:21.741008    3520 out.go:177] * Using the hyperkit driver based on user configuration
	I1212 15:10:21.783147    3520 start.go:298] selected driver: hyperkit
	I1212 15:10:21.783174    3520 start.go:902] validating driver "hyperkit" against <nil>
	I1212 15:10:21.783197    3520 start.go:913] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 15:10:21.787528    3520 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 15:10:21.787623    3520 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/17777-1259/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I1212 15:10:21.795421    3520 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.32.0
	I1212 15:10:21.799229    3520 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1212 15:10:21.799252    3520 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I1212 15:10:21.799282    3520 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1212 15:10:21.799494    3520 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 15:10:21.799562    3520 cni.go:84] Creating CNI manager for ""
	I1212 15:10:21.799570    3520 cni.go:136] 0 nodes found, recommending kindnet
	I1212 15:10:21.799577    3520 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1212 15:10:21.799589    3520 start_flags.go:323] config:
	{Name:multinode-449000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-449000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:
docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 15:10:21.799729    3520 iso.go:125] acquiring lock: {Name:mk96a55b7848c6dd3321ed62339797ab51ac6b5a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 15:10:21.842121    3520 out.go:177] * Starting control plane node multinode-449000 in cluster multinode-449000
	I1212 15:10:21.863225    3520 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1212 15:10:21.863296    3520 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17777-1259/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I1212 15:10:21.863329    3520 cache.go:56] Caching tarball of preloaded images
	I1212 15:10:21.863543    3520 preload.go:174] Found /Users/jenkins/minikube-integration/17777-1259/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1212 15:10:21.863562    3520 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I1212 15:10:21.864073    3520 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/multinode-449000/config.json ...
	I1212 15:10:21.864111    3520 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/multinode-449000/config.json: {Name:mkc2472e7d5f2805774069becb49f4ae7180bc73 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 15:10:21.864879    3520 start.go:365] acquiring machines lock for multinode-449000: {Name:mk51496c390b032727acf9b9a5f67e389f19ec26 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1212 15:10:21.865012    3520 start.go:369] acquired machines lock for "multinode-449000" in 111.579µs
	I1212 15:10:21.865058    3520 start.go:93] Provisioning new machine with config: &{Name:multinode-449000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17738/minikube-v1.32.1-1701996673-17738-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernete
sConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-449000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144
MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1212 15:10:21.865140    3520 start.go:125] createHost starting for "" (driver="hyperkit")
	I1212 15:10:21.906971    3520 out.go:204] * Creating hyperkit VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1212 15:10:21.907368    3520 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1212 15:10:21.907445    3520 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1212 15:10:21.916498    3520 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51139
	I1212 15:10:21.916865    3520 main.go:141] libmachine: () Calling .GetVersion
	I1212 15:10:21.917284    3520 main.go:141] libmachine: Using API Version  1
	I1212 15:10:21.917294    3520 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 15:10:21.917536    3520 main.go:141] libmachine: () Calling .GetMachineName
	I1212 15:10:21.917637    3520 main.go:141] libmachine: (multinode-449000) Calling .GetMachineName
	I1212 15:10:21.917727    3520 main.go:141] libmachine: (multinode-449000) Calling .DriverName
	I1212 15:10:21.917831    3520 start.go:159] libmachine.API.Create for "multinode-449000" (driver="hyperkit")
	I1212 15:10:21.917854    3520 client.go:168] LocalClient.Create starting
	I1212 15:10:21.917887    3520 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17777-1259/.minikube/certs/ca.pem
	I1212 15:10:21.917936    3520 main.go:141] libmachine: Decoding PEM data...
	I1212 15:10:21.917962    3520 main.go:141] libmachine: Parsing certificate...
	I1212 15:10:21.918030    3520 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17777-1259/.minikube/certs/cert.pem
	I1212 15:10:21.918066    3520 main.go:141] libmachine: Decoding PEM data...
	I1212 15:10:21.918078    3520 main.go:141] libmachine: Parsing certificate...
	I1212 15:10:21.918091    3520 main.go:141] libmachine: Running pre-create checks...
	I1212 15:10:21.918102    3520 main.go:141] libmachine: (multinode-449000) Calling .PreCreateCheck
	I1212 15:10:21.918184    3520 main.go:141] libmachine: (multinode-449000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1212 15:10:21.918350    3520 main.go:141] libmachine: (multinode-449000) Calling .GetConfigRaw
	I1212 15:10:21.918769    3520 main.go:141] libmachine: Creating machine...
	I1212 15:10:21.918777    3520 main.go:141] libmachine: (multinode-449000) Calling .Create
	I1212 15:10:21.918852    3520 main.go:141] libmachine: (multinode-449000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1212 15:10:21.919006    3520 main.go:141] libmachine: (multinode-449000) DBG | I1212 15:10:21.918850    3528 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/17777-1259/.minikube
	I1212 15:10:21.919060    3520 main.go:141] libmachine: (multinode-449000) Downloading /Users/jenkins/minikube-integration/17777-1259/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17777-1259/.minikube/cache/iso/amd64/minikube-v1.32.1-1701996673-17738-amd64.iso...
	I1212 15:10:22.081162    3520 main.go:141] libmachine: (multinode-449000) DBG | I1212 15:10:22.081062    3528 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/17777-1259/.minikube/machines/multinode-449000/id_rsa...
	I1212 15:10:22.262831    3520 main.go:141] libmachine: (multinode-449000) DBG | I1212 15:10:22.262742    3528 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/17777-1259/.minikube/machines/multinode-449000/multinode-449000.rawdisk...
	I1212 15:10:22.262846    3520 main.go:141] libmachine: (multinode-449000) DBG | Writing magic tar header
	I1212 15:10:22.262859    3520 main.go:141] libmachine: (multinode-449000) DBG | Writing SSH key tar header
	I1212 15:10:22.263698    3520 main.go:141] libmachine: (multinode-449000) DBG | I1212 15:10:22.263637    3528 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/17777-1259/.minikube/machines/multinode-449000 ...
	I1212 15:10:22.587520    3520 main.go:141] libmachine: (multinode-449000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1212 15:10:22.587539    3520 main.go:141] libmachine: (multinode-449000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/17777-1259/.minikube/machines/multinode-449000/hyperkit.pid
	I1212 15:10:22.587569    3520 main.go:141] libmachine: (multinode-449000) DBG | Using UUID 9fde523a-9943-11ee-8111-f01898ef957c
	I1212 15:10:22.708128    3520 main.go:141] libmachine: (multinode-449000) DBG | Generated MAC f2:78:2:3f:65:80
	I1212 15:10:22.708160    3520 main.go:141] libmachine: (multinode-449000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-449000
	I1212 15:10:22.708216    3520 main.go:141] libmachine: (multinode-449000) DBG | 2023/12/12 15:10:22 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/17777-1259/.minikube/machines/multinode-449000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"9fde523a-9943-11ee-8111-f01898ef957c", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc00009f1d0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/17777-1259/.minikube/machines/multinode-449000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/17777-1259/.minikube/machines/multinode-449000/bzimage", Initrd:"/Users/jenkins/minikube-integration/17777-1259/.minikube/machines/multinode-449000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Proce
ss)(nil)}
	I1212 15:10:22.708267    3520 main.go:141] libmachine: (multinode-449000) DBG | 2023/12/12 15:10:22 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/17777-1259/.minikube/machines/multinode-449000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"9fde523a-9943-11ee-8111-f01898ef957c", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc00009f1d0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/17777-1259/.minikube/machines/multinode-449000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/17777-1259/.minikube/machines/multinode-449000/bzimage", Initrd:"/Users/jenkins/minikube-integration/17777-1259/.minikube/machines/multinode-449000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Proce
ss)(nil)}
	I1212 15:10:22.708345    3520 main.go:141] libmachine: (multinode-449000) DBG | 2023/12/12 15:10:22 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/17777-1259/.minikube/machines/multinode-449000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "9fde523a-9943-11ee-8111-f01898ef957c", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/17777-1259/.minikube/machines/multinode-449000/multinode-449000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/17777-1259/.minikube/machines/multinode-449000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/17777-1259/.minikube/machines/multinode-449000/tty,log=/Users/jenkins/minikube-integration/17777-1259/.minikube/machines/multinode-449000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/17777-1259/.minikube/machines/multinode-449000/bzimage,/Users/jenkins/minikube-integration/1777
7-1259/.minikube/machines/multinode-449000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-449000"}
	I1212 15:10:22.708406    3520 main.go:141] libmachine: (multinode-449000) DBG | 2023/12/12 15:10:22 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/17777-1259/.minikube/machines/multinode-449000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 9fde523a-9943-11ee-8111-f01898ef957c -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/17777-1259/.minikube/machines/multinode-449000/multinode-449000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/17777-1259/.minikube/machines/multinode-449000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/17777-1259/.minikube/machines/multinode-449000/tty,log=/Users/jenkins/minikube-integration/17777-1259/.minikube/machines/multinode-449000/console-ring -f kexec,/Users/jenkins/minikube-integration/17777-1259/.minikube/machines/multinode-449000/bzimage,/Users/jenkins/minikube-integration/17777-1259/.minikube/machines/multinode-449000/initrd,earlyprintk=
serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-449000"
	I1212 15:10:22.708430    3520 main.go:141] libmachine: (multinode-449000) DBG | 2023/12/12 15:10:22 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I1212 15:10:22.711161    3520 main.go:141] libmachine: (multinode-449000) DBG | 2023/12/12 15:10:22 DEBUG: hyperkit: Pid is 3531
	I1212 15:10:22.711630    3520 main.go:141] libmachine: (multinode-449000) DBG | Attempt 0
	I1212 15:10:22.711645    3520 main.go:141] libmachine: (multinode-449000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1212 15:10:22.711775    3520 main.go:141] libmachine: (multinode-449000) DBG | hyperkit pid from json: 3531
	I1212 15:10:22.712666    3520 main.go:141] libmachine: (multinode-449000) DBG | Searching for f2:78:2:3f:65:80 in /var/db/dhcpd_leases ...
	I1212 15:10:22.712757    3520 main.go:141] libmachine: (multinode-449000) DBG | Found 11 entries in /var/db/dhcpd_leases!
	I1212 15:10:22.712771    3520 main.go:141] libmachine: (multinode-449000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ae:69:eb:53:8c:5b ID:1,ae:69:eb:53:8c:5b Lease:0x6578e85b}
	I1212 15:10:22.712804    3520 main.go:141] libmachine: (multinode-449000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6a:5c:e0:d8:73:5b ID:1,6a:5c:e0:d8:73:5b Lease:0x6578e846}
	I1212 15:10:22.712826    3520 main.go:141] libmachine: (multinode-449000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:22:bc:7e:11:6c:f5 ID:1,22:bc:7e:11:6c:f5 Lease:0x657a397e}
	I1212 15:10:22.712850    3520 main.go:141] libmachine: (multinode-449000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fe:39:cb:bf:ae:44 ID:1,fe:39:cb:bf:ae:44 Lease:0x657a3959}
	I1212 15:10:22.712859    3520 main.go:141] libmachine: (multinode-449000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:9a:c9:f7:34:af:5d ID:1,9a:c9:f7:34:af:5d Lease:0x657a391e}
	I1212 15:10:22.712866    3520 main.go:141] libmachine: (multinode-449000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:8d:b0:a8:5f:be ID:1,1e:8d:b0:a8:5f:be Lease:0x657a388a}
	I1212 15:10:22.712873    3520 main.go:141] libmachine: (multinode-449000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e2:94:44:d7:9:11 ID:1,e2:94:44:d7:9:11 Lease:0x6578e6f7}
	I1212 15:10:22.712884    3520 main.go:141] libmachine: (multinode-449000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:a8:f:4a:47:90 ID:1,9a:a8:f:4a:47:90 Lease:0x657a3755}
	I1212 15:10:22.712893    3520 main.go:141] libmachine: (multinode-449000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:c6:cb:3c:63:0:f6 ID:1,c6:cb:3c:63:0:f6 Lease:0x657a3728}
	I1212 15:10:22.712900    3520 main.go:141] libmachine: (multinode-449000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ea:48:d3:f6:3:6b ID:1,ea:48:d3:f6:3:6b Lease:0x657a3638}
	I1212 15:10:22.712910    3520 main.go:141] libmachine: (multinode-449000) DBG | dhcp entry: {Name: IPAddress:192.169.0.2 HWAddress:82:2b:f6:5b:7f:bf ID:1,82:2b:f6:5b:7f:bf Lease:0x657a35f5}
	I1212 15:10:22.718886    3520 main.go:141] libmachine: (multinode-449000) DBG | 2023/12/12 15:10:22 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I1212 15:10:22.771819    3520 main.go:141] libmachine: (multinode-449000) DBG | 2023/12/12 15:10:22 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/17777-1259/.minikube/machines/multinode-449000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I1212 15:10:22.772392    3520 main.go:141] libmachine: (multinode-449000) DBG | 2023/12/12 15:10:22 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1212 15:10:22.772413    3520 main.go:141] libmachine: (multinode-449000) DBG | 2023/12/12 15:10:22 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1212 15:10:22.772422    3520 main.go:141] libmachine: (multinode-449000) DBG | 2023/12/12 15:10:22 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1212 15:10:22.772432    3520 main.go:141] libmachine: (multinode-449000) DBG | 2023/12/12 15:10:22 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1212 15:10:23.139450    3520 main.go:141] libmachine: (multinode-449000) DBG | 2023/12/12 15:10:23 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I1212 15:10:23.139466    3520 main.go:141] libmachine: (multinode-449000) DBG | 2023/12/12 15:10:23 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I1212 15:10:23.243394    3520 main.go:141] libmachine: (multinode-449000) DBG | 2023/12/12 15:10:23 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1212 15:10:23.243412    3520 main.go:141] libmachine: (multinode-449000) DBG | 2023/12/12 15:10:23 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1212 15:10:23.243430    3520 main.go:141] libmachine: (multinode-449000) DBG | 2023/12/12 15:10:23 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1212 15:10:23.243446    3520 main.go:141] libmachine: (multinode-449000) DBG | 2023/12/12 15:10:23 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1212 15:10:23.244344    3520 main.go:141] libmachine: (multinode-449000) DBG | 2023/12/12 15:10:23 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I1212 15:10:23.244367    3520 main.go:141] libmachine: (multinode-449000) DBG | 2023/12/12 15:10:23 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I1212 15:10:24.713398    3520 main.go:141] libmachine: (multinode-449000) DBG | Attempt 1
	I1212 15:10:24.713419    3520 main.go:141] libmachine: (multinode-449000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1212 15:10:24.713488    3520 main.go:141] libmachine: (multinode-449000) DBG | hyperkit pid from json: 3531
	I1212 15:10:24.714567    3520 main.go:141] libmachine: (multinode-449000) DBG | Searching for f2:78:2:3f:65:80 in /var/db/dhcpd_leases ...
	I1212 15:10:24.714624    3520 main.go:141] libmachine: (multinode-449000) DBG | Found 11 entries in /var/db/dhcpd_leases!
	I1212 15:10:24.714642    3520 main.go:141] libmachine: (multinode-449000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ae:69:eb:53:8c:5b ID:1,ae:69:eb:53:8c:5b Lease:0x6578e85b}
	I1212 15:10:24.714652    3520 main.go:141] libmachine: (multinode-449000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6a:5c:e0:d8:73:5b ID:1,6a:5c:e0:d8:73:5b Lease:0x6578e846}
	I1212 15:10:24.714663    3520 main.go:141] libmachine: (multinode-449000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:22:bc:7e:11:6c:f5 ID:1,22:bc:7e:11:6c:f5 Lease:0x657a397e}
	I1212 15:10:24.714672    3520 main.go:141] libmachine: (multinode-449000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fe:39:cb:bf:ae:44 ID:1,fe:39:cb:bf:ae:44 Lease:0x657a3959}
	I1212 15:10:24.714690    3520 main.go:141] libmachine: (multinode-449000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:9a:c9:f7:34:af:5d ID:1,9a:c9:f7:34:af:5d Lease:0x657a391e}
	I1212 15:10:24.714701    3520 main.go:141] libmachine: (multinode-449000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:8d:b0:a8:5f:be ID:1,1e:8d:b0:a8:5f:be Lease:0x657a388a}
	I1212 15:10:24.714720    3520 main.go:141] libmachine: (multinode-449000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e2:94:44:d7:9:11 ID:1,e2:94:44:d7:9:11 Lease:0x6578e6f7}
	I1212 15:10:24.714730    3520 main.go:141] libmachine: (multinode-449000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:a8:f:4a:47:90 ID:1,9a:a8:f:4a:47:90 Lease:0x657a3755}
	I1212 15:10:24.714738    3520 main.go:141] libmachine: (multinode-449000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:c6:cb:3c:63:0:f6 ID:1,c6:cb:3c:63:0:f6 Lease:0x657a3728}
	I1212 15:10:24.714747    3520 main.go:141] libmachine: (multinode-449000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ea:48:d3:f6:3:6b ID:1,ea:48:d3:f6:3:6b Lease:0x657a3638}
	I1212 15:10:24.714759    3520 main.go:141] libmachine: (multinode-449000) DBG | dhcp entry: {Name: IPAddress:192.169.0.2 HWAddress:82:2b:f6:5b:7f:bf ID:1,82:2b:f6:5b:7f:bf Lease:0x657a35f5}
	I1212 15:10:26.715960    3520 main.go:141] libmachine: (multinode-449000) DBG | Attempt 2
	I1212 15:10:26.715982    3520 main.go:141] libmachine: (multinode-449000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1212 15:10:26.716076    3520 main.go:141] libmachine: (multinode-449000) DBG | hyperkit pid from json: 3531
	I1212 15:10:26.716884    3520 main.go:141] libmachine: (multinode-449000) DBG | Searching for f2:78:2:3f:65:80 in /var/db/dhcpd_leases ...
	I1212 15:10:26.716925    3520 main.go:141] libmachine: (multinode-449000) DBG | Found 11 entries in /var/db/dhcpd_leases!
	I1212 15:10:26.716941    3520 main.go:141] libmachine: (multinode-449000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ae:69:eb:53:8c:5b ID:1,ae:69:eb:53:8c:5b Lease:0x6578e85b}
	I1212 15:10:26.716956    3520 main.go:141] libmachine: (multinode-449000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6a:5c:e0:d8:73:5b ID:1,6a:5c:e0:d8:73:5b Lease:0x6578e846}
	I1212 15:10:26.716967    3520 main.go:141] libmachine: (multinode-449000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:22:bc:7e:11:6c:f5 ID:1,22:bc:7e:11:6c:f5 Lease:0x657a397e}
	I1212 15:10:26.716986    3520 main.go:141] libmachine: (multinode-449000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fe:39:cb:bf:ae:44 ID:1,fe:39:cb:bf:ae:44 Lease:0x657a3959}
	I1212 15:10:26.716996    3520 main.go:141] libmachine: (multinode-449000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:9a:c9:f7:34:af:5d ID:1,9a:c9:f7:34:af:5d Lease:0x657a391e}
	I1212 15:10:26.717004    3520 main.go:141] libmachine: (multinode-449000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:8d:b0:a8:5f:be ID:1,1e:8d:b0:a8:5f:be Lease:0x657a388a}
	I1212 15:10:26.717014    3520 main.go:141] libmachine: (multinode-449000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e2:94:44:d7:9:11 ID:1,e2:94:44:d7:9:11 Lease:0x6578e6f7}
	I1212 15:10:26.717022    3520 main.go:141] libmachine: (multinode-449000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:a8:f:4a:47:90 ID:1,9a:a8:f:4a:47:90 Lease:0x657a3755}
	I1212 15:10:26.717031    3520 main.go:141] libmachine: (multinode-449000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:c6:cb:3c:63:0:f6 ID:1,c6:cb:3c:63:0:f6 Lease:0x657a3728}
	I1212 15:10:26.717043    3520 main.go:141] libmachine: (multinode-449000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ea:48:d3:f6:3:6b ID:1,ea:48:d3:f6:3:6b Lease:0x657a3638}
	I1212 15:10:26.717052    3520 main.go:141] libmachine: (multinode-449000) DBG | dhcp entry: {Name: IPAddress:192.169.0.2 HWAddress:82:2b:f6:5b:7f:bf ID:1,82:2b:f6:5b:7f:bf Lease:0x657a35f5}
	I1212 15:10:28.164384    3520 main.go:141] libmachine: (multinode-449000) DBG | 2023/12/12 15:10:28 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I1212 15:10:28.164478    3520 main.go:141] libmachine: (multinode-449000) DBG | 2023/12/12 15:10:28 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I1212 15:10:28.164487    3520 main.go:141] libmachine: (multinode-449000) DBG | 2023/12/12 15:10:28 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I1212 15:10:28.717827    3520 main.go:141] libmachine: (multinode-449000) DBG | Attempt 3
	I1212 15:10:28.717844    3520 main.go:141] libmachine: (multinode-449000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1212 15:10:28.717954    3520 main.go:141] libmachine: (multinode-449000) DBG | hyperkit pid from json: 3531
	I1212 15:10:28.718762    3520 main.go:141] libmachine: (multinode-449000) DBG | Searching for f2:78:2:3f:65:80 in /var/db/dhcpd_leases ...
	I1212 15:10:28.718820    3520 main.go:141] libmachine: (multinode-449000) DBG | Found 11 entries in /var/db/dhcpd_leases!
	I1212 15:10:28.718832    3520 main.go:141] libmachine: (multinode-449000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ae:69:eb:53:8c:5b ID:1,ae:69:eb:53:8c:5b Lease:0x6578e85b}
	I1212 15:10:28.718865    3520 main.go:141] libmachine: (multinode-449000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6a:5c:e0:d8:73:5b ID:1,6a:5c:e0:d8:73:5b Lease:0x6578e846}
	I1212 15:10:28.718878    3520 main.go:141] libmachine: (multinode-449000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:22:bc:7e:11:6c:f5 ID:1,22:bc:7e:11:6c:f5 Lease:0x657a397e}
	I1212 15:10:28.718887    3520 main.go:141] libmachine: (multinode-449000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fe:39:cb:bf:ae:44 ID:1,fe:39:cb:bf:ae:44 Lease:0x657a3959}
	I1212 15:10:28.718916    3520 main.go:141] libmachine: (multinode-449000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:9a:c9:f7:34:af:5d ID:1,9a:c9:f7:34:af:5d Lease:0x657a391e}
	I1212 15:10:28.718932    3520 main.go:141] libmachine: (multinode-449000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:8d:b0:a8:5f:be ID:1,1e:8d:b0:a8:5f:be Lease:0x657a388a}
	I1212 15:10:28.718952    3520 main.go:141] libmachine: (multinode-449000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e2:94:44:d7:9:11 ID:1,e2:94:44:d7:9:11 Lease:0x6578e6f7}
	I1212 15:10:28.718966    3520 main.go:141] libmachine: (multinode-449000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:a8:f:4a:47:90 ID:1,9a:a8:f:4a:47:90 Lease:0x657a3755}
	I1212 15:10:28.718976    3520 main.go:141] libmachine: (multinode-449000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:c6:cb:3c:63:0:f6 ID:1,c6:cb:3c:63:0:f6 Lease:0x657a3728}
	I1212 15:10:28.718987    3520 main.go:141] libmachine: (multinode-449000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ea:48:d3:f6:3:6b ID:1,ea:48:d3:f6:3:6b Lease:0x657a3638}
	I1212 15:10:28.718996    3520 main.go:141] libmachine: (multinode-449000) DBG | dhcp entry: {Name: IPAddress:192.169.0.2 HWAddress:82:2b:f6:5b:7f:bf ID:1,82:2b:f6:5b:7f:bf Lease:0x657a35f5}
	I1212 15:10:30.719017    3520 main.go:141] libmachine: (multinode-449000) DBG | Attempt 4
	I1212 15:10:30.719033    3520 main.go:141] libmachine: (multinode-449000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1212 15:10:30.719089    3520 main.go:141] libmachine: (multinode-449000) DBG | hyperkit pid from json: 3531
	I1212 15:10:30.719912    3520 main.go:141] libmachine: (multinode-449000) DBG | Searching for f2:78:2:3f:65:80 in /var/db/dhcpd_leases ...
	I1212 15:10:30.719933    3520 main.go:141] libmachine: (multinode-449000) DBG | Found 11 entries in /var/db/dhcpd_leases!
	I1212 15:10:30.719956    3520 main.go:141] libmachine: (multinode-449000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ae:69:eb:53:8c:5b ID:1,ae:69:eb:53:8c:5b Lease:0x6578e85b}
	I1212 15:10:30.719966    3520 main.go:141] libmachine: (multinode-449000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6a:5c:e0:d8:73:5b ID:1,6a:5c:e0:d8:73:5b Lease:0x6578e846}
	I1212 15:10:30.719977    3520 main.go:141] libmachine: (multinode-449000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:22:bc:7e:11:6c:f5 ID:1,22:bc:7e:11:6c:f5 Lease:0x657a397e}
	I1212 15:10:30.719991    3520 main.go:141] libmachine: (multinode-449000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fe:39:cb:bf:ae:44 ID:1,fe:39:cb:bf:ae:44 Lease:0x657a3959}
	I1212 15:10:30.720004    3520 main.go:141] libmachine: (multinode-449000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:9a:c9:f7:34:af:5d ID:1,9a:c9:f7:34:af:5d Lease:0x657a391e}
	I1212 15:10:30.720035    3520 main.go:141] libmachine: (multinode-449000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:8d:b0:a8:5f:be ID:1,1e:8d:b0:a8:5f:be Lease:0x657a388a}
	I1212 15:10:30.720046    3520 main.go:141] libmachine: (multinode-449000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e2:94:44:d7:9:11 ID:1,e2:94:44:d7:9:11 Lease:0x6578e6f7}
	I1212 15:10:30.720057    3520 main.go:141] libmachine: (multinode-449000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:a8:f:4a:47:90 ID:1,9a:a8:f:4a:47:90 Lease:0x657a3755}
	I1212 15:10:30.720066    3520 main.go:141] libmachine: (multinode-449000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:c6:cb:3c:63:0:f6 ID:1,c6:cb:3c:63:0:f6 Lease:0x657a3728}
	I1212 15:10:30.720075    3520 main.go:141] libmachine: (multinode-449000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ea:48:d3:f6:3:6b ID:1,ea:48:d3:f6:3:6b Lease:0x657a3638}
	I1212 15:10:30.720083    3520 main.go:141] libmachine: (multinode-449000) DBG | dhcp entry: {Name: IPAddress:192.169.0.2 HWAddress:82:2b:f6:5b:7f:bf ID:1,82:2b:f6:5b:7f:bf Lease:0x657a35f5}
	I1212 15:10:32.721385    3520 main.go:141] libmachine: (multinode-449000) DBG | Attempt 5
	I1212 15:10:32.721409    3520 main.go:141] libmachine: (multinode-449000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1212 15:10:32.721493    3520 main.go:141] libmachine: (multinode-449000) DBG | hyperkit pid from json: 3531
	I1212 15:10:32.722946    3520 main.go:141] libmachine: (multinode-449000) DBG | Searching for f2:78:2:3f:65:80 in /var/db/dhcpd_leases ...
	I1212 15:10:32.723037    3520 main.go:141] libmachine: (multinode-449000) DBG | Found 12 entries in /var/db/dhcpd_leases!
	I1212 15:10:32.723054    3520 main.go:141] libmachine: (multinode-449000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:f2:78:2:3f:65:80 ID:1,f2:78:2:3f:65:80 Lease:0x657a39e7}
	I1212 15:10:32.723067    3520 main.go:141] libmachine: (multinode-449000) DBG | Found match: f2:78:2:3f:65:80
	I1212 15:10:32.723076    3520 main.go:141] libmachine: (multinode-449000) DBG | IP: 192.169.0.13
	I1212 15:10:32.723121    3520 main.go:141] libmachine: (multinode-449000) Calling .GetConfigRaw
	I1212 15:10:32.723845    3520 main.go:141] libmachine: (multinode-449000) Calling .DriverName
	I1212 15:10:32.724030    3520 main.go:141] libmachine: (multinode-449000) Calling .DriverName
	I1212 15:10:32.724167    3520 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1212 15:10:32.724186    3520 main.go:141] libmachine: (multinode-449000) Calling .GetState
	I1212 15:10:32.724321    3520 main.go:141] libmachine: (multinode-449000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1212 15:10:32.724404    3520 main.go:141] libmachine: (multinode-449000) DBG | hyperkit pid from json: 3531
	I1212 15:10:32.725322    3520 main.go:141] libmachine: Detecting operating system of created instance...
	I1212 15:10:32.725334    3520 main.go:141] libmachine: Waiting for SSH to be available...
	I1212 15:10:32.725339    3520 main.go:141] libmachine: Getting to WaitForSSH function...
	I1212 15:10:32.725346    3520 main.go:141] libmachine: (multinode-449000) Calling .GetSSHHostname
	I1212 15:10:32.725449    3520 main.go:141] libmachine: (multinode-449000) Calling .GetSSHPort
	I1212 15:10:32.725543    3520 main.go:141] libmachine: (multinode-449000) Calling .GetSSHKeyPath
	I1212 15:10:32.725645    3520 main.go:141] libmachine: (multinode-449000) Calling .GetSSHKeyPath
	I1212 15:10:32.725740    3520 main.go:141] libmachine: (multinode-449000) Calling .GetSSHUsername
	I1212 15:10:32.725871    3520 main.go:141] libmachine: Using SSH client type: native
	I1212 15:10:32.726166    3520 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1406180] 0x1408e60 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I1212 15:10:32.726174    3520 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1212 15:10:32.787810    3520 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1212 15:10:32.787823    3520 main.go:141] libmachine: Detecting the provisioner...
	I1212 15:10:32.787829    3520 main.go:141] libmachine: (multinode-449000) Calling .GetSSHHostname
	I1212 15:10:32.787955    3520 main.go:141] libmachine: (multinode-449000) Calling .GetSSHPort
	I1212 15:10:32.788045    3520 main.go:141] libmachine: (multinode-449000) Calling .GetSSHKeyPath
	I1212 15:10:32.788145    3520 main.go:141] libmachine: (multinode-449000) Calling .GetSSHKeyPath
	I1212 15:10:32.788240    3520 main.go:141] libmachine: (multinode-449000) Calling .GetSSHUsername
	I1212 15:10:32.788377    3520 main.go:141] libmachine: Using SSH client type: native
	I1212 15:10:32.788636    3520 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1406180] 0x1408e60 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I1212 15:10:32.788645    3520 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1212 15:10:32.851671    3520 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-g0ec83c8-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I1212 15:10:32.851729    3520 main.go:141] libmachine: found compatible host: buildroot
	I1212 15:10:32.851736    3520 main.go:141] libmachine: Provisioning with buildroot...
	I1212 15:10:32.851744    3520 main.go:141] libmachine: (multinode-449000) Calling .GetMachineName
	I1212 15:10:32.851871    3520 buildroot.go:166] provisioning hostname "multinode-449000"
	I1212 15:10:32.851880    3520 main.go:141] libmachine: (multinode-449000) Calling .GetMachineName
	I1212 15:10:32.851972    3520 main.go:141] libmachine: (multinode-449000) Calling .GetSSHHostname
	I1212 15:10:32.852052    3520 main.go:141] libmachine: (multinode-449000) Calling .GetSSHPort
	I1212 15:10:32.852137    3520 main.go:141] libmachine: (multinode-449000) Calling .GetSSHKeyPath
	I1212 15:10:32.852221    3520 main.go:141] libmachine: (multinode-449000) Calling .GetSSHKeyPath
	I1212 15:10:32.852307    3520 main.go:141] libmachine: (multinode-449000) Calling .GetSSHUsername
	I1212 15:10:32.852440    3520 main.go:141] libmachine: Using SSH client type: native
	I1212 15:10:32.852677    3520 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1406180] 0x1408e60 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I1212 15:10:32.852686    3520 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-449000 && echo "multinode-449000" | sudo tee /etc/hostname
	I1212 15:10:32.924230    3520 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-449000
	
	I1212 15:10:32.924250    3520 main.go:141] libmachine: (multinode-449000) Calling .GetSSHHostname
	I1212 15:10:32.924382    3520 main.go:141] libmachine: (multinode-449000) Calling .GetSSHPort
	I1212 15:10:32.924474    3520 main.go:141] libmachine: (multinode-449000) Calling .GetSSHKeyPath
	I1212 15:10:32.924572    3520 main.go:141] libmachine: (multinode-449000) Calling .GetSSHKeyPath
	I1212 15:10:32.924685    3520 main.go:141] libmachine: (multinode-449000) Calling .GetSSHUsername
	I1212 15:10:32.924833    3520 main.go:141] libmachine: Using SSH client type: native
	I1212 15:10:32.925103    3520 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1406180] 0x1408e60 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I1212 15:10:32.925116    3520 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-449000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-449000/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-449000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 15:10:32.991426    3520 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1212 15:10:32.991447    3520 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/17777-1259/.minikube CaCertPath:/Users/jenkins/minikube-integration/17777-1259/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/17777-1259/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/17777-1259/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/17777-1259/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/17777-1259/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/17777-1259/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/17777-1259/.minikube}
	I1212 15:10:32.991460    3520 buildroot.go:174] setting up certificates
	I1212 15:10:32.991472    3520 provision.go:83] configureAuth start
	I1212 15:10:32.991480    3520 main.go:141] libmachine: (multinode-449000) Calling .GetMachineName
	I1212 15:10:32.991635    3520 main.go:141] libmachine: (multinode-449000) Calling .GetIP
	I1212 15:10:32.991723    3520 main.go:141] libmachine: (multinode-449000) Calling .GetSSHHostname
	I1212 15:10:32.991811    3520 provision.go:138] copyHostCerts
	I1212 15:10:32.991844    3520 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17777-1259/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/17777-1259/.minikube/ca.pem
	I1212 15:10:32.991892    3520 exec_runner.go:144] found /Users/jenkins/minikube-integration/17777-1259/.minikube/ca.pem, removing ...
	I1212 15:10:32.991901    3520 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17777-1259/.minikube/ca.pem
	I1212 15:10:32.992029    3520 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17777-1259/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/17777-1259/.minikube/ca.pem (1082 bytes)
	I1212 15:10:32.992238    3520 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17777-1259/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/17777-1259/.minikube/cert.pem
	I1212 15:10:32.992269    3520 exec_runner.go:144] found /Users/jenkins/minikube-integration/17777-1259/.minikube/cert.pem, removing ...
	I1212 15:10:32.992274    3520 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17777-1259/.minikube/cert.pem
	I1212 15:10:32.992361    3520 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17777-1259/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/17777-1259/.minikube/cert.pem (1123 bytes)
	I1212 15:10:32.992514    3520 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17777-1259/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/17777-1259/.minikube/key.pem
	I1212 15:10:32.992561    3520 exec_runner.go:144] found /Users/jenkins/minikube-integration/17777-1259/.minikube/key.pem, removing ...
	I1212 15:10:32.992566    3520 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17777-1259/.minikube/key.pem
	I1212 15:10:32.992646    3520 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17777-1259/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/17777-1259/.minikube/key.pem (1675 bytes)
	I1212 15:10:32.992792    3520 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/17777-1259/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/17777-1259/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/17777-1259/.minikube/certs/ca-key.pem org=jenkins.multinode-449000 san=[192.169.0.13 192.169.0.13 localhost 127.0.0.1 minikube multinode-449000]
	I1212 15:10:33.047926    3520 provision.go:172] copyRemoteCerts
	I1212 15:10:33.047979    3520 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 15:10:33.047993    3520 main.go:141] libmachine: (multinode-449000) Calling .GetSSHHostname
	I1212 15:10:33.048190    3520 main.go:141] libmachine: (multinode-449000) Calling .GetSSHPort
	I1212 15:10:33.048288    3520 main.go:141] libmachine: (multinode-449000) Calling .GetSSHKeyPath
	I1212 15:10:33.048404    3520 main.go:141] libmachine: (multinode-449000) Calling .GetSSHUsername
	I1212 15:10:33.048493    3520 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17777-1259/.minikube/machines/multinode-449000/id_rsa Username:docker}
	I1212 15:10:33.086749    3520 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17777-1259/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1212 15:10:33.086808    3520 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17777-1259/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1212 15:10:33.102401    3520 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17777-1259/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1212 15:10:33.102474    3520 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17777-1259/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1212 15:10:33.118183    3520 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17777-1259/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1212 15:10:33.118247    3520 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17777-1259/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1212 15:10:33.133764    3520 provision.go:86] duration metric: configureAuth took 142.279509ms
	I1212 15:10:33.133776    3520 buildroot.go:189] setting minikube options for container-runtime
	I1212 15:10:33.133905    3520 config.go:182] Loaded profile config "multinode-449000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1212 15:10:33.133918    3520 main.go:141] libmachine: (multinode-449000) Calling .DriverName
	I1212 15:10:33.134046    3520 main.go:141] libmachine: (multinode-449000) Calling .GetSSHHostname
	I1212 15:10:33.134129    3520 main.go:141] libmachine: (multinode-449000) Calling .GetSSHPort
	I1212 15:10:33.134218    3520 main.go:141] libmachine: (multinode-449000) Calling .GetSSHKeyPath
	I1212 15:10:33.134301    3520 main.go:141] libmachine: (multinode-449000) Calling .GetSSHKeyPath
	I1212 15:10:33.134368    3520 main.go:141] libmachine: (multinode-449000) Calling .GetSSHUsername
	I1212 15:10:33.134486    3520 main.go:141] libmachine: Using SSH client type: native
	I1212 15:10:33.134724    3520 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1406180] 0x1408e60 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I1212 15:10:33.134733    3520 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1212 15:10:33.198604    3520 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1212 15:10:33.198622    3520 buildroot.go:70] root file system type: tmpfs
	I1212 15:10:33.198695    3520 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1212 15:10:33.198707    3520 main.go:141] libmachine: (multinode-449000) Calling .GetSSHHostname
	I1212 15:10:33.198846    3520 main.go:141] libmachine: (multinode-449000) Calling .GetSSHPort
	I1212 15:10:33.198940    3520 main.go:141] libmachine: (multinode-449000) Calling .GetSSHKeyPath
	I1212 15:10:33.199034    3520 main.go:141] libmachine: (multinode-449000) Calling .GetSSHKeyPath
	I1212 15:10:33.199124    3520 main.go:141] libmachine: (multinode-449000) Calling .GetSSHUsername
	I1212 15:10:33.199253    3520 main.go:141] libmachine: Using SSH client type: native
	I1212 15:10:33.199500    3520 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1406180] 0x1408e60 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I1212 15:10:33.199544    3520 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1212 15:10:33.271466    3520 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1212 15:10:33.271486    3520 main.go:141] libmachine: (multinode-449000) Calling .GetSSHHostname
	I1212 15:10:33.271618    3520 main.go:141] libmachine: (multinode-449000) Calling .GetSSHPort
	I1212 15:10:33.271707    3520 main.go:141] libmachine: (multinode-449000) Calling .GetSSHKeyPath
	I1212 15:10:33.271794    3520 main.go:141] libmachine: (multinode-449000) Calling .GetSSHKeyPath
	I1212 15:10:33.271888    3520 main.go:141] libmachine: (multinode-449000) Calling .GetSSHUsername
	I1212 15:10:33.272018    3520 main.go:141] libmachine: Using SSH client type: native
	I1212 15:10:33.272264    3520 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1406180] 0x1408e60 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I1212 15:10:33.272277    3520 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1212 15:10:33.763707    3520 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1212 15:10:33.763723    3520 main.go:141] libmachine: Checking connection to Docker...
	I1212 15:10:33.763730    3520 main.go:141] libmachine: (multinode-449000) Calling .GetURL
	I1212 15:10:33.763867    3520 main.go:141] libmachine: Docker is up and running!
	I1212 15:10:33.763876    3520 main.go:141] libmachine: Reticulating splines...
	I1212 15:10:33.763885    3520 client.go:171] LocalClient.Create took 11.846102764s
	I1212 15:10:33.763901    3520 start.go:167] duration metric: libmachine.API.Create for "multinode-449000" took 11.846154994s
	I1212 15:10:33.763909    3520 start.go:300] post-start starting for "multinode-449000" (driver="hyperkit")
	I1212 15:10:33.763919    3520 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 15:10:33.763929    3520 main.go:141] libmachine: (multinode-449000) Calling .DriverName
	I1212 15:10:33.764073    3520 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 15:10:33.764085    3520 main.go:141] libmachine: (multinode-449000) Calling .GetSSHHostname
	I1212 15:10:33.764170    3520 main.go:141] libmachine: (multinode-449000) Calling .GetSSHPort
	I1212 15:10:33.764253    3520 main.go:141] libmachine: (multinode-449000) Calling .GetSSHKeyPath
	I1212 15:10:33.764335    3520 main.go:141] libmachine: (multinode-449000) Calling .GetSSHUsername
	I1212 15:10:33.764417    3520 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17777-1259/.minikube/machines/multinode-449000/id_rsa Username:docker}
	I1212 15:10:33.801129    3520 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 15:10:33.803879    3520 command_runner.go:130] > NAME=Buildroot
	I1212 15:10:33.803887    3520 command_runner.go:130] > VERSION=2021.02.12-1-g0ec83c8-dirty
	I1212 15:10:33.803891    3520 command_runner.go:130] > ID=buildroot
	I1212 15:10:33.803895    3520 command_runner.go:130] > VERSION_ID=2021.02.12
	I1212 15:10:33.803899    3520 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I1212 15:10:33.803984    3520 info.go:137] Remote host: Buildroot 2021.02.12
	I1212 15:10:33.803997    3520 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17777-1259/.minikube/addons for local assets ...
	I1212 15:10:33.804094    3520 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17777-1259/.minikube/files for local assets ...
	I1212 15:10:33.804280    3520 filesync.go:149] local asset: /Users/jenkins/minikube-integration/17777-1259/.minikube/files/etc/ssl/certs/17202.pem -> 17202.pem in /etc/ssl/certs
	I1212 15:10:33.804287    3520 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17777-1259/.minikube/files/etc/ssl/certs/17202.pem -> /etc/ssl/certs/17202.pem
	I1212 15:10:33.804488    3520 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 15:10:33.810114    3520 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17777-1259/.minikube/files/etc/ssl/certs/17202.pem --> /etc/ssl/certs/17202.pem (1708 bytes)
	I1212 15:10:33.826415    3520 start.go:303] post-start completed in 62.499244ms
	I1212 15:10:33.826441    3520 main.go:141] libmachine: (multinode-449000) Calling .GetConfigRaw
	I1212 15:10:33.827036    3520 main.go:141] libmachine: (multinode-449000) Calling .GetIP
	I1212 15:10:33.827188    3520 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/multinode-449000/config.json ...
	I1212 15:10:33.827523    3520 start.go:128] duration metric: createHost completed in 11.9624502s
	I1212 15:10:33.827540    3520 main.go:141] libmachine: (multinode-449000) Calling .GetSSHHostname
	I1212 15:10:33.827658    3520 main.go:141] libmachine: (multinode-449000) Calling .GetSSHPort
	I1212 15:10:33.827755    3520 main.go:141] libmachine: (multinode-449000) Calling .GetSSHKeyPath
	I1212 15:10:33.827850    3520 main.go:141] libmachine: (multinode-449000) Calling .GetSSHKeyPath
	I1212 15:10:33.827928    3520 main.go:141] libmachine: (multinode-449000) Calling .GetSSHUsername
	I1212 15:10:33.828031    3520 main.go:141] libmachine: Using SSH client type: native
	I1212 15:10:33.828266    3520 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1406180] 0x1408e60 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I1212 15:10:33.828275    3520 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1212 15:10:33.890954    3520 main.go:141] libmachine: SSH cmd err, output: <nil>: 1702422633.772668635
	
	I1212 15:10:33.890968    3520 fix.go:206] guest clock: 1702422633.772668635
	I1212 15:10:33.890973    3520 fix.go:219] Guest: 2023-12-12 15:10:33.772668635 -0800 PST Remote: 2023-12-12 15:10:33.827533 -0800 PST m=+12.400650783 (delta=-54.864365ms)
	I1212 15:10:33.890995    3520 fix.go:190] guest clock delta is within tolerance: -54.864365ms
	I1212 15:10:33.891000    3520 start.go:83] releasing machines lock for "multinode-449000", held for 12.026059761s
	I1212 15:10:33.891031    3520 main.go:141] libmachine: (multinode-449000) Calling .DriverName
	I1212 15:10:33.891161    3520 main.go:141] libmachine: (multinode-449000) Calling .GetIP
	I1212 15:10:33.891260    3520 main.go:141] libmachine: (multinode-449000) Calling .DriverName
	I1212 15:10:33.891541    3520 main.go:141] libmachine: (multinode-449000) Calling .DriverName
	I1212 15:10:33.891651    3520 main.go:141] libmachine: (multinode-449000) Calling .DriverName
	I1212 15:10:33.891737    3520 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 15:10:33.891771    3520 main.go:141] libmachine: (multinode-449000) Calling .GetSSHHostname
	I1212 15:10:33.891821    3520 ssh_runner.go:195] Run: cat /version.json
	I1212 15:10:33.891834    3520 main.go:141] libmachine: (multinode-449000) Calling .GetSSHHostname
	I1212 15:10:33.891870    3520 main.go:141] libmachine: (multinode-449000) Calling .GetSSHPort
	I1212 15:10:33.891928    3520 main.go:141] libmachine: (multinode-449000) Calling .GetSSHPort
	I1212 15:10:33.891953    3520 main.go:141] libmachine: (multinode-449000) Calling .GetSSHKeyPath
	I1212 15:10:33.892036    3520 main.go:141] libmachine: (multinode-449000) Calling .GetSSHUsername
	I1212 15:10:33.892049    3520 main.go:141] libmachine: (multinode-449000) Calling .GetSSHKeyPath
	I1212 15:10:33.892125    3520 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17777-1259/.minikube/machines/multinode-449000/id_rsa Username:docker}
	I1212 15:10:33.892142    3520 main.go:141] libmachine: (multinode-449000) Calling .GetSSHUsername
	I1212 15:10:33.892232    3520 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17777-1259/.minikube/machines/multinode-449000/id_rsa Username:docker}
	I1212 15:10:33.925494    3520 command_runner.go:130] > {"iso_version": "v1.32.1-1701996673-17738", "kicbase_version": "v0.0.42-1701974066-17719", "minikube_version": "v1.32.0", "commit": "2518fadffa02a308edcd7fa670f350a21819c5e4"}
	I1212 15:10:33.925750    3520 ssh_runner.go:195] Run: systemctl --version
	I1212 15:10:33.980304    3520 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1212 15:10:33.981207    3520 command_runner.go:130] > systemd 247 (247)
	I1212 15:10:33.981234    3520 command_runner.go:130] > -PAM -AUDIT -SELINUX -IMA -APPARMOR -SMACK -SYSVINIT -UTMP -LIBCRYPTSETUP -GCRYPT -GNUTLS +ACL +XZ +LZ4 -ZSTD +SECCOMP +BLKID -ELFUTILS +KMOD -IDN2 -IDN -PCRE2 default-hierarchy=hybrid
	I1212 15:10:33.981429    3520 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1212 15:10:33.985385    3520 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1212 15:10:33.985470    3520 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 15:10:33.985520    3520 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 15:10:33.995781    3520 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I1212 15:10:33.996008    3520 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1212 15:10:33.996025    3520 start.go:475] detecting cgroup driver to use...
	I1212 15:10:33.996125    3520 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 15:10:34.008962    3520 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I1212 15:10:34.009280    3520 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1212 15:10:34.016301    3520 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1212 15:10:34.023324    3520 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1212 15:10:34.023367    3520 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1212 15:10:34.030325    3520 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 15:10:34.037329    3520 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1212 15:10:34.044233    3520 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 15:10:34.051364    3520 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 15:10:34.058569    3520 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1212 15:10:34.065719    3520 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 15:10:34.071832    3520 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1212 15:10:34.072005    3520 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 15:10:34.078430    3520 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 15:10:34.168397    3520 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1212 15:10:34.181783    3520 start.go:475] detecting cgroup driver to use...
	I1212 15:10:34.181861    3520 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1212 15:10:34.193817    3520 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I1212 15:10:34.194043    3520 command_runner.go:130] > [Unit]
	I1212 15:10:34.194053    3520 command_runner.go:130] > Description=Docker Application Container Engine
	I1212 15:10:34.194058    3520 command_runner.go:130] > Documentation=https://docs.docker.com
	I1212 15:10:34.194063    3520 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I1212 15:10:34.194067    3520 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I1212 15:10:34.194072    3520 command_runner.go:130] > StartLimitBurst=3
	I1212 15:10:34.194076    3520 command_runner.go:130] > StartLimitIntervalSec=60
	I1212 15:10:34.194080    3520 command_runner.go:130] > [Service]
	I1212 15:10:34.194083    3520 command_runner.go:130] > Type=notify
	I1212 15:10:34.194087    3520 command_runner.go:130] > Restart=on-failure
	I1212 15:10:34.194109    3520 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I1212 15:10:34.194121    3520 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I1212 15:10:34.194128    3520 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I1212 15:10:34.194133    3520 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I1212 15:10:34.194138    3520 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I1212 15:10:34.194144    3520 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I1212 15:10:34.194163    3520 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I1212 15:10:34.194172    3520 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I1212 15:10:34.194178    3520 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I1212 15:10:34.194182    3520 command_runner.go:130] > ExecStart=
	I1212 15:10:34.194193    3520 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	I1212 15:10:34.194198    3520 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I1212 15:10:34.194205    3520 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I1212 15:10:34.194210    3520 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I1212 15:10:34.194214    3520 command_runner.go:130] > LimitNOFILE=infinity
	I1212 15:10:34.194218    3520 command_runner.go:130] > LimitNPROC=infinity
	I1212 15:10:34.194221    3520 command_runner.go:130] > LimitCORE=infinity
	I1212 15:10:34.194226    3520 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I1212 15:10:34.194230    3520 command_runner.go:130] > # Only systemd 226 and above support this version.
	I1212 15:10:34.194238    3520 command_runner.go:130] > TasksMax=infinity
	I1212 15:10:34.194241    3520 command_runner.go:130] > TimeoutStartSec=0
	I1212 15:10:34.194247    3520 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I1212 15:10:34.194250    3520 command_runner.go:130] > Delegate=yes
	I1212 15:10:34.194258    3520 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I1212 15:10:34.194263    3520 command_runner.go:130] > KillMode=process
	I1212 15:10:34.194267    3520 command_runner.go:130] > [Install]
	I1212 15:10:34.194276    3520 command_runner.go:130] > WantedBy=multi-user.target
	I1212 15:10:34.194462    3520 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 15:10:34.210531    3520 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 15:10:34.226572    3520 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 15:10:34.235382    3520 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1212 15:10:34.243733    3520 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1212 15:10:34.262334    3520 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1212 15:10:34.271520    3520 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 15:10:34.283199    3520 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I1212 15:10:34.283546    3520 ssh_runner.go:195] Run: which cri-dockerd
	I1212 15:10:34.285916    3520 command_runner.go:130] > /usr/bin/cri-dockerd
	I1212 15:10:34.286131    3520 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1212 15:10:34.292463    3520 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1212 15:10:34.303332    3520 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1212 15:10:34.388185    3520 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1212 15:10:34.487306    3520 docker.go:560] configuring docker to use "cgroupfs" as cgroup driver...
	I1212 15:10:34.487389    3520 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1212 15:10:34.498805    3520 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 15:10:34.585007    3520 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1212 15:10:35.817139    3520 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.232121014s)
	I1212 15:10:35.817200    3520 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1212 15:10:35.900738    3520 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1212 15:10:35.996010    3520 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1212 15:10:36.086865    3520 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 15:10:36.174513    3520 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1212 15:10:36.183908    3520 command_runner.go:130] ! Job failed. See "journalctl -xe" for details.
	I1212 15:10:36.184023    3520 ssh_runner.go:195] Run: sudo journalctl --no-pager -u cri-docker.socket
	I1212 15:10:36.190794    3520 command_runner.go:130] > -- Journal begins at Tue 2023-12-12 23:10:30 UTC, ends at Tue 2023-12-12 23:10:36 UTC. --
	I1212 15:10:36.190804    3520 command_runner.go:130] > Dec 12 23:10:31 minikube systemd[1]: Starting CRI Docker Socket for the API.
	I1212 15:10:36.190810    3520 command_runner.go:130] > Dec 12 23:10:31 minikube systemd[1]: Listening on CRI Docker Socket for the API.
	I1212 15:10:36.190827    3520 command_runner.go:130] > Dec 12 23:10:33 multinode-449000 systemd[1]: cri-docker.socket: Succeeded.
	I1212 15:10:36.190833    3520 command_runner.go:130] > Dec 12 23:10:33 multinode-449000 systemd[1]: Closed CRI Docker Socket for the API.
	I1212 15:10:36.190839    3520 command_runner.go:130] > Dec 12 23:10:33 multinode-449000 systemd[1]: Stopping CRI Docker Socket for the API.
	I1212 15:10:36.190850    3520 command_runner.go:130] > Dec 12 23:10:33 multinode-449000 systemd[1]: Starting CRI Docker Socket for the API.
	I1212 15:10:36.190857    3520 command_runner.go:130] > Dec 12 23:10:33 multinode-449000 systemd[1]: Listening on CRI Docker Socket for the API.
	I1212 15:10:36.190862    3520 command_runner.go:130] > Dec 12 23:10:36 multinode-449000 systemd[1]: cri-docker.socket: Succeeded.
	I1212 15:10:36.190867    3520 command_runner.go:130] > Dec 12 23:10:36 multinode-449000 systemd[1]: Closed CRI Docker Socket for the API.
	I1212 15:10:36.190873    3520 command_runner.go:130] > Dec 12 23:10:36 multinode-449000 systemd[1]: Stopping CRI Docker Socket for the API.
	I1212 15:10:36.190883    3520 command_runner.go:130] > Dec 12 23:10:36 multinode-449000 systemd[1]: cri-docker.socket: Socket service cri-docker.service already active, refusing.
	I1212 15:10:36.190892    3520 command_runner.go:130] > Dec 12 23:10:36 multinode-449000 systemd[1]: Failed to listen on CRI Docker Socket for the API.
	I1212 15:10:36.213505    3520 out.go:177] 
	W1212 15:10:36.233430    3520 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Job failed. See "journalctl -xe" for details.
	
	sudo journalctl --no-pager -u cri-docker.socket:
	-- stdout --
	-- Journal begins at Tue 2023-12-12 23:10:30 UTC, ends at Tue 2023-12-12 23:10:36 UTC. --
	Dec 12 23:10:31 minikube systemd[1]: Starting CRI Docker Socket for the API.
	Dec 12 23:10:31 minikube systemd[1]: Listening on CRI Docker Socket for the API.
	Dec 12 23:10:33 multinode-449000 systemd[1]: cri-docker.socket: Succeeded.
	Dec 12 23:10:33 multinode-449000 systemd[1]: Closed CRI Docker Socket for the API.
	Dec 12 23:10:33 multinode-449000 systemd[1]: Stopping CRI Docker Socket for the API.
	Dec 12 23:10:33 multinode-449000 systemd[1]: Starting CRI Docker Socket for the API.
	Dec 12 23:10:33 multinode-449000 systemd[1]: Listening on CRI Docker Socket for the API.
	Dec 12 23:10:36 multinode-449000 systemd[1]: cri-docker.socket: Succeeded.
	Dec 12 23:10:36 multinode-449000 systemd[1]: Closed CRI Docker Socket for the API.
	Dec 12 23:10:36 multinode-449000 systemd[1]: Stopping CRI Docker Socket for the API.
	Dec 12 23:10:36 multinode-449000 systemd[1]: cri-docker.socket: Socket service cri-docker.service already active, refusing.
	Dec 12 23:10:36 multinode-449000 systemd[1]: Failed to listen on CRI Docker Socket for the API.
	
	-- /stdout --
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Job failed. See "journalctl -xe" for details.
	
	sudo journalctl --no-pager -u cri-docker.socket:
	-- stdout --
	-- Journal begins at Tue 2023-12-12 23:10:30 UTC, ends at Tue 2023-12-12 23:10:36 UTC. --
	Dec 12 23:10:31 minikube systemd[1]: Starting CRI Docker Socket for the API.
	Dec 12 23:10:31 minikube systemd[1]: Listening on CRI Docker Socket for the API.
	Dec 12 23:10:33 multinode-449000 systemd[1]: cri-docker.socket: Succeeded.
	Dec 12 23:10:33 multinode-449000 systemd[1]: Closed CRI Docker Socket for the API.
	Dec 12 23:10:33 multinode-449000 systemd[1]: Stopping CRI Docker Socket for the API.
	Dec 12 23:10:33 multinode-449000 systemd[1]: Starting CRI Docker Socket for the API.
	Dec 12 23:10:33 multinode-449000 systemd[1]: Listening on CRI Docker Socket for the API.
	Dec 12 23:10:36 multinode-449000 systemd[1]: cri-docker.socket: Succeeded.
	Dec 12 23:10:36 multinode-449000 systemd[1]: Closed CRI Docker Socket for the API.
	Dec 12 23:10:36 multinode-449000 systemd[1]: Stopping CRI Docker Socket for the API.
	Dec 12 23:10:36 multinode-449000 systemd[1]: cri-docker.socket: Socket service cri-docker.service already active, refusing.
	Dec 12 23:10:36 multinode-449000 systemd[1]: Failed to listen on CRI Docker Socket for the API.
	
	-- /stdout --
	W1212 15:10:36.233459    3520 out.go:239] * 
	* 
	W1212 15:10:36.235961    3520 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 15:10:36.298340    3520 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:88: failed to start cluster. args "out/minikube-darwin-amd64 start -p multinode-449000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=hyperkit " : exit status 90
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-449000 -n multinode-449000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-449000 -n multinode-449000: exit status 6 (150.522465ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1212 15:10:36.496680    3536 status.go:415] kubeconfig endpoint: extract IP: "multinode-449000" does not appear in /Users/jenkins/minikube-integration/17777-1259/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "multinode-449000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestMultiNode/serial/FreshStart2Nodes (15.09s)
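Root cause visible in the journal above: "sudo systemctl restart cri-docker.socket" is refused with "Socket service cri-docker.service already active, refusing." systemd will not start a socket unit while the service it activates is already running, so the stop half of the restart succeeds and the start half fails, which is exactly what the RUNTIME_ENABLE error reports. A minimal shell sketch of that systemd behavior and one way around it, assuming a host with the same cri-docker.service/cri-docker.socket unit names seen in the log; this only illustrates the interaction and is not minikube's own remediation:

	# With the service active, restarting its socket is refused.
	sudo systemctl start cri-docker.service
	sudo systemctl restart cri-docker.socket   # "Socket service cri-docker.service already active, refusing."
	
	# Stopping the service first lets the socket come back up, after which the
	# service can be started again (or be socket-activated on first use).
	sudo systemctl stop cri-docker.service
	sudo systemctl restart cri-docker.socket
	sudo systemctl start cri-docker.service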

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (105.47s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:509: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-449000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:509: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-449000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: exit status 1 (95.321924ms)

                                                
                                                
** stderr ** 
	error: cluster "multinode-449000" does not exist

                                                
                                                
** /stderr **
multinode_test.go:511: failed to create busybox deployment to multinode cluster
multinode_test.go:514: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-449000 -- rollout status deployment/busybox
multinode_test.go:514: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-449000 -- rollout status deployment/busybox: exit status 1 (91.406947ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-449000"

                                                
                                                
** /stderr **
multinode_test.go:516: failed to deploy busybox to multinode cluster
multinode_test.go:521: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-449000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:521: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-449000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (91.575683ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-449000"

                                                
                                                
** /stderr **
multinode_test.go:524: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:521: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-449000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:521: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-449000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (92.346311ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-449000"

                                                
                                                
** /stderr **
multinode_test.go:524: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:521: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-449000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:521: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-449000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (94.243569ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-449000"

                                                
                                                
** /stderr **
multinode_test.go:524: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:521: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-449000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:521: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-449000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (92.520446ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-449000"

                                                
                                                
** /stderr **
multinode_test.go:524: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:521: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-449000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:521: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-449000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (92.654695ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-449000"

                                                
                                                
** /stderr **
multinode_test.go:524: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:521: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-449000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:521: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-449000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (92.577292ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-449000"

                                                
                                                
** /stderr **
multinode_test.go:524: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:521: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-449000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:521: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-449000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (98.252275ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-449000"

                                                
                                                
** /stderr **
multinode_test.go:524: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:521: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-449000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:521: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-449000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (93.461838ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-449000"

                                                
                                                
** /stderr **
multinode_test.go:524: failed to retrieve Pod IPs (may be temporary): exit status 1
E1212 15:11:20.342550    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/ingress-addon-legacy-267000/client.crt: no such file or directory
E1212 15:11:20.349086    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/ingress-addon-legacy-267000/client.crt: no such file or directory
E1212 15:11:20.361301    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/ingress-addon-legacy-267000/client.crt: no such file or directory
E1212 15:11:20.381889    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/ingress-addon-legacy-267000/client.crt: no such file or directory
E1212 15:11:20.422377    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/ingress-addon-legacy-267000/client.crt: no such file or directory
E1212 15:11:20.503395    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/ingress-addon-legacy-267000/client.crt: no such file or directory
E1212 15:11:20.665662    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/ingress-addon-legacy-267000/client.crt: no such file or directory
E1212 15:11:20.986532    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/ingress-addon-legacy-267000/client.crt: no such file or directory
E1212 15:11:21.627489    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/ingress-addon-legacy-267000/client.crt: no such file or directory
E1212 15:11:22.909205    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/ingress-addon-legacy-267000/client.crt: no such file or directory
E1212 15:11:25.470085    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/ingress-addon-legacy-267000/client.crt: no such file or directory
E1212 15:11:30.592249    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/ingress-addon-legacy-267000/client.crt: no such file or directory
multinode_test.go:521: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-449000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:521: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-449000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (91.71025ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-449000"

                                                
                                                
** /stderr **
multinode_test.go:524: failed to retrieve Pod IPs (may be temporary): exit status 1
E1212 15:11:40.839331    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/ingress-addon-legacy-267000/client.crt: no such file or directory
E1212 15:11:51.608524    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/addons-609000/client.crt: no such file or directory
multinode_test.go:521: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-449000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:521: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-449000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (93.11588ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-449000"

                                                
                                                
** /stderr **
multinode_test.go:524: failed to retrieve Pod IPs (may be temporary): exit status 1
E1212 15:12:01.320241    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/ingress-addon-legacy-267000/client.crt: no such file or directory
multinode_test.go:521: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-449000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:521: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-449000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (95.758755ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-449000"

                                                
                                                
** /stderr **
multinode_test.go:524: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:540: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:544: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-449000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:544: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-449000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (91.532584ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-449000"

                                                
                                                
** /stderr **
multinode_test.go:546: failed get Pod names
multinode_test.go:552: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-449000 -- exec  -- nslookup kubernetes.io
multinode_test.go:552: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-449000 -- exec  -- nslookup kubernetes.io: exit status 1 (91.68648ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-449000"

                                                
                                                
** /stderr **
multinode_test.go:554: Pod  could not resolve 'kubernetes.io': exit status 1
multinode_test.go:562: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-449000 -- exec  -- nslookup kubernetes.default
multinode_test.go:562: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-449000 -- exec  -- nslookup kubernetes.default: exit status 1 (91.771401ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-449000"

                                                
                                                
** /stderr **
multinode_test.go:564: Pod  could not resolve 'kubernetes.default': exit status 1
multinode_test.go:570: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-449000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:570: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-449000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (91.858456ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-449000"

                                                
                                                
** /stderr **
multinode_test.go:572: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-449000 -n multinode-449000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-449000 -n multinode-449000: exit status 6 (144.134044ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1212 15:12:21.971348    3614 status.go:415] kubeconfig endpoint: extract IP: "multinode-449000" does not appear in /Users/jenkins/minikube-integration/17777-1259/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "multinode-449000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (105.47s)
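Every kubectl call in this test fails with "cluster \"multinode-449000\" does not exist" / "no server found for cluster", which is a cascade of the FreshStart2Nodes failure: the start aborted before the profile was written into the kubeconfig, so the kubectl wrapper has no server endpoint to talk to. One way to confirm that state from a shell (a sketch, using the same kubeconfig path the run uses):

	export KUBECONFIG=/Users/jenkins/minikube-integration/17777-1259/kubeconfig
	# A healthy profile would appear as a context, a cluster and a user; here it is absent.
	kubectl config get-contexts
	kubectl config view -o jsonpath='{.clusters[*].name}'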

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.24s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:580: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-449000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:580: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-449000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (91.395584ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-449000"

                                                
                                                
** /stderr **
multinode_test.go:582: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-449000 -n multinode-449000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-449000 -n multinode-449000: exit status 6 (145.206513ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1212 15:12:22.208267    3622 status.go:415] kubeconfig endpoint: extract IP: "multinode-449000" does not appear in /Users/jenkins/minikube-integration/17777-1259/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "multinode-449000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (0.24s)

                                                
                                    
TestMultiNode/serial/AddNode (0.35s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:111: (dbg) Run:  out/minikube-darwin-amd64 node add -p multinode-449000 -v 3 --alsologtostderr
multinode_test.go:111: (dbg) Non-zero exit: out/minikube-darwin-amd64 node add -p multinode-449000 -v 3 --alsologtostderr: exit status 119 (201.744743ms)

                                                
                                                
-- stdout --
	* This control plane is not running! (state=Stopped)
	  To start a cluster, run: "minikube start -p multinode-449000"

                                                
                                                
-- /stdout --
** stderr ** 
	I1212 15:12:22.272081    3627 out.go:296] Setting OutFile to fd 1 ...
	I1212 15:12:22.272471    3627 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 15:12:22.272477    3627 out.go:309] Setting ErrFile to fd 2...
	I1212 15:12:22.272481    3627 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 15:12:22.272695    3627 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17777-1259/.minikube/bin
	I1212 15:12:22.273037    3627 mustload.go:65] Loading cluster: multinode-449000
	I1212 15:12:22.273355    3627 config.go:182] Loaded profile config "multinode-449000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1212 15:12:22.273722    3627 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1212 15:12:22.273774    3627 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1212 15:12:22.281325    3627 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51186
	I1212 15:12:22.281752    3627 main.go:141] libmachine: () Calling .GetVersion
	I1212 15:12:22.282189    3627 main.go:141] libmachine: Using API Version  1
	I1212 15:12:22.282198    3627 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 15:12:22.282407    3627 main.go:141] libmachine: () Calling .GetMachineName
	I1212 15:12:22.282529    3627 main.go:141] libmachine: (multinode-449000) Calling .GetState
	I1212 15:12:22.282616    3627 main.go:141] libmachine: (multinode-449000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1212 15:12:22.282688    3627 main.go:141] libmachine: (multinode-449000) DBG | hyperkit pid from json: 3531
	I1212 15:12:22.283644    3627 host.go:66] Checking if "multinode-449000" exists ...
	I1212 15:12:22.283888    3627 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1212 15:12:22.283912    3627 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1212 15:12:22.291613    3627 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51188
	I1212 15:12:22.291962    3627 main.go:141] libmachine: () Calling .GetVersion
	I1212 15:12:22.292272    3627 main.go:141] libmachine: Using API Version  1
	I1212 15:12:22.292282    3627 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 15:12:22.292464    3627 main.go:141] libmachine: () Calling .GetMachineName
	I1212 15:12:22.292547    3627 main.go:141] libmachine: (multinode-449000) Calling .DriverName
	I1212 15:12:22.292633    3627 api_server.go:166] Checking apiserver status ...
	I1212 15:12:22.292689    3627 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 15:12:22.292710    3627 main.go:141] libmachine: (multinode-449000) Calling .GetSSHHostname
	I1212 15:12:22.292784    3627 main.go:141] libmachine: (multinode-449000) Calling .GetSSHPort
	I1212 15:12:22.292896    3627 main.go:141] libmachine: (multinode-449000) Calling .GetSSHKeyPath
	I1212 15:12:22.292979    3627 main.go:141] libmachine: (multinode-449000) Calling .GetSSHUsername
	I1212 15:12:22.293056    3627 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17777-1259/.minikube/machines/multinode-449000/id_rsa Username:docker}
	W1212 15:12:22.332276    3627 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 15:12:22.354012    3627 out.go:177] * This control plane is not running! (state=Stopped)
	W1212 15:12:22.374650    3627 out.go:239] ! This is unusual - you may want to investigate using "minikube logs -p multinode-449000"
	! This is unusual - you may want to investigate using "minikube logs -p multinode-449000"
	I1212 15:12:22.396510    3627 out.go:177]   To start a cluster, run: "minikube start -p multinode-449000"

                                                
                                                
** /stderr **
multinode_test.go:113: failed to add node to current cluster. args "out/minikube-darwin-amd64 node add -p multinode-449000 -v 3 --alsologtostderr" : exit status 119
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-449000 -n multinode-449000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-449000 -n multinode-449000: exit status 6 (142.805019ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1212 15:12:22.553284    3631 status.go:415] kubeconfig endpoint: extract IP: "multinode-449000" does not appear in /Users/jenkins/minikube-integration/17777-1259/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "multinode-449000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestMultiNode/serial/AddNode (0.35s)
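Unlike the earlier cascading kubectl failures, "node add" exits with status 119 because it first probes the control plane: it runs "sudo pgrep -xnf kube-apiserver.*minikube.*" over SSH and, finding no apiserver process, reports "This control plane is not running! (state=Stopped)". A sketch of the equivalent manual probe, reusing the SSH user, key path and guest IP shown in the log above:

	# Look for a running kube-apiserver inside the control-plane VM, as "node add" does.
	ssh -i /Users/jenkins/minikube-integration/17777-1259/.minikube/machines/multinode-449000/id_rsa \
	    docker@192.169.0.13 'sudo pgrep -xnf "kube-apiserver.*minikube.*"' \
	  || echo "control plane is not running"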

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.18s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:211: (dbg) Run:  kubectl --context multinode-449000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
multinode_test.go:211: (dbg) Non-zero exit: kubectl --context multinode-449000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (35.709978ms)

                                                
                                                
** stderr ** 
	Error in configuration: context was not found for specified context: multinode-449000

                                                
                                                
** /stderr **
multinode_test.go:213: failed to 'kubectl get nodes' with args "kubectl --context multinode-449000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
multinode_test.go:220: failed to decode json from label list: args "kubectl --context multinode-449000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-449000 -n multinode-449000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-449000 -n multinode-449000: exit status 6 (146.837355ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1212 15:12:22.736396    3637 status.go:415] kubeconfig endpoint: extract IP: "multinode-449000" does not appear in /Users/jenkins/minikube-integration/17777-1259/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "multinode-449000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestMultiNode/serial/MultiNodeLabels (0.18s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.32s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
multinode_test.go:156: expected profile "multinode-449000" in json of 'profile list' include 3 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"multinode-449000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"multinode-449000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/17738/minikube-v1.32.1-1701996673-17738-amd64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"VMDriver\":\"\",\"Driver\":\"hyperkit\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMH
idden\":false,\"KVMNUMACount\":1,\"APIServerPort\":0,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.28.4\",\"ClusterName\":\"multinode-449000\",\"Namespace\":\"default\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\",\"NodeIP\":\"\",\"NodePort\":8443,\"NodeName\":\"\"},\"Nodes\":[{\"Name\":
\"\",\"IP\":\"192.169.0.13\",\"Port\":8443,\"KubernetesVersion\":\"v1.28.4\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSo
ck\":\"\",\"SSHAgentPID\":0,\"AutoPauseInterval\":60000000000,\"GPUs\":\"\"},\"Active\":false}]}"*. args: "out/minikube-darwin-amd64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-449000 -n multinode-449000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-449000 -n multinode-449000: exit status 6 (142.20103ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1212 15:12:23.058797    3647 status.go:415] kubeconfig endpoint: extract IP: "multinode-449000" does not appear in /Users/jenkins/minikube-integration/17777-1259/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "multinode-449000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestMultiNode/serial/ProfileList (0.32s)

                                                
                                    
TestMultiNode/serial/CopyFile (0.29s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:174: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-449000 status --output json --alsologtostderr
multinode_test.go:174: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-449000 status --output json --alsologtostderr: exit status 6 (146.531126ms)

                                                
                                                
-- stdout --
	{"Name":"multinode-449000","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Misconfigured","Worker":false}

                                                
                                                
-- /stdout --
** stderr ** 
	I1212 15:12:23.122882    3652 out.go:296] Setting OutFile to fd 1 ...
	I1212 15:12:23.123194    3652 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 15:12:23.123199    3652 out.go:309] Setting ErrFile to fd 2...
	I1212 15:12:23.123203    3652 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 15:12:23.123401    3652 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17777-1259/.minikube/bin
	I1212 15:12:23.123597    3652 out.go:303] Setting JSON to true
	I1212 15:12:23.123620    3652 mustload.go:65] Loading cluster: multinode-449000
	I1212 15:12:23.123669    3652 notify.go:220] Checking for updates...
	I1212 15:12:23.123898    3652 config.go:182] Loaded profile config "multinode-449000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1212 15:12:23.123911    3652 status.go:255] checking status of multinode-449000 ...
	I1212 15:12:23.124260    3652 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1212 15:12:23.124307    3652 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1212 15:12:23.132531    3652 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51220
	I1212 15:12:23.132911    3652 main.go:141] libmachine: () Calling .GetVersion
	I1212 15:12:23.133340    3652 main.go:141] libmachine: Using API Version  1
	I1212 15:12:23.133350    3652 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 15:12:23.133549    3652 main.go:141] libmachine: () Calling .GetMachineName
	I1212 15:12:23.133655    3652 main.go:141] libmachine: (multinode-449000) Calling .GetState
	I1212 15:12:23.133783    3652 main.go:141] libmachine: (multinode-449000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1212 15:12:23.133819    3652 main.go:141] libmachine: (multinode-449000) DBG | hyperkit pid from json: 3531
	I1212 15:12:23.134787    3652 status.go:330] multinode-449000 host status = "Running" (err=<nil>)
	I1212 15:12:23.134807    3652 host.go:66] Checking if "multinode-449000" exists ...
	I1212 15:12:23.135041    3652 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1212 15:12:23.135059    3652 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1212 15:12:23.142754    3652 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51222
	I1212 15:12:23.143085    3652 main.go:141] libmachine: () Calling .GetVersion
	I1212 15:12:23.143405    3652 main.go:141] libmachine: Using API Version  1
	I1212 15:12:23.143429    3652 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 15:12:23.143658    3652 main.go:141] libmachine: () Calling .GetMachineName
	I1212 15:12:23.143753    3652 main.go:141] libmachine: (multinode-449000) Calling .GetIP
	I1212 15:12:23.143836    3652 host.go:66] Checking if "multinode-449000" exists ...
	I1212 15:12:23.144080    3652 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1212 15:12:23.144104    3652 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1212 15:12:23.155538    3652 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51224
	I1212 15:12:23.155897    3652 main.go:141] libmachine: () Calling .GetVersion
	I1212 15:12:23.156228    3652 main.go:141] libmachine: Using API Version  1
	I1212 15:12:23.156240    3652 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 15:12:23.156436    3652 main.go:141] libmachine: () Calling .GetMachineName
	I1212 15:12:23.156525    3652 main.go:141] libmachine: (multinode-449000) Calling .DriverName
	I1212 15:12:23.156651    3652 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 15:12:23.156672    3652 main.go:141] libmachine: (multinode-449000) Calling .GetSSHHostname
	I1212 15:12:23.156768    3652 main.go:141] libmachine: (multinode-449000) Calling .GetSSHPort
	I1212 15:12:23.156846    3652 main.go:141] libmachine: (multinode-449000) Calling .GetSSHKeyPath
	I1212 15:12:23.156937    3652 main.go:141] libmachine: (multinode-449000) Calling .GetSSHUsername
	I1212 15:12:23.157021    3652 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17777-1259/.minikube/machines/multinode-449000/id_rsa Username:docker}
	I1212 15:12:23.192942    3652 ssh_runner.go:195] Run: systemctl --version
	I1212 15:12:23.196270    3652 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	E1212 15:12:23.205336    3652 status.go:415] kubeconfig endpoint: extract IP: "multinode-449000" does not appear in /Users/jenkins/minikube-integration/17777-1259/kubeconfig
	I1212 15:12:23.205362    3652 api_server.go:166] Checking apiserver status ...
	I1212 15:12:23.205396    3652 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 15:12:23.213050    3652 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 15:12:23.213063    3652 status.go:421] multinode-449000 apiserver status = Stopped (err=<nil>)
	I1212 15:12:23.213071    3652 status.go:257] multinode-449000 status: &{Name:multinode-449000 Host:Running Kubelet:Stopped APIServer:Stopped Kubeconfig:Misconfigured Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:176: failed to run minikube status. args "out/minikube-darwin-amd64 -p multinode-449000 status --output json --alsologtostderr" : exit status 6
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-449000 -n multinode-449000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-449000 -n multinode-449000: exit status 6 (143.438241ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1212 15:12:23.349233    3657 status.go:415] kubeconfig endpoint: extract IP: "multinode-449000" does not appear in /Users/jenkins/minikube-integration/17777-1259/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "multinode-449000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestMultiNode/serial/CopyFile (0.29s)

                                                
                                    
TestMultiNode/serial/StopNode (0.43s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:238: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-449000 node stop m03
multinode_test.go:238: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-449000 node stop m03: exit status 85 (147.385426ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube_node_295f67d8757edd996fe5c1e7ccde72c355ccf4dc_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:240: node stop returned an error. args "out/minikube-darwin-amd64 -p multinode-449000 node stop m03": exit status 85
multinode_test.go:244: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-449000 status
multinode_test.go:244: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-449000 status: exit status 6 (143.303667ms)

                                                
                                                
-- stdout --
	multinode-449000
	type: Control Plane
	host: Running
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Misconfigured
	
	
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1212 15:12:23.640276    3664 status.go:415] kubeconfig endpoint: extract IP: "multinode-449000" does not appear in /Users/jenkins/minikube-integration/17777-1259/kubeconfig

                                                
                                                
** /stderr **
multinode_test.go:247: failed to run minikube status. args "out/minikube-darwin-amd64 -p multinode-449000 status" : exit status 6
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-449000 -n multinode-449000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-449000 -n multinode-449000: exit status 6 (143.244323ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1212 15:12:23.783591    3669 status.go:415] kubeconfig endpoint: extract IP: "multinode-449000" does not appear in /Users/jenkins/minikube-integration/17777-1259/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "multinode-449000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestMultiNode/serial/StopNode (0.43s)

                                                
                                    
TestMultiNode/serial/StartAfterStop (0.43s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-449000 node start m03 --alsologtostderr
multinode_test.go:282: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-449000 node start m03 --alsologtostderr: exit status 85 (145.994613ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1212 15:12:23.847180    3674 out.go:296] Setting OutFile to fd 1 ...
	I1212 15:12:23.847509    3674 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 15:12:23.847516    3674 out.go:309] Setting ErrFile to fd 2...
	I1212 15:12:23.847520    3674 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 15:12:23.847713    3674 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17777-1259/.minikube/bin
	I1212 15:12:23.848062    3674 mustload.go:65] Loading cluster: multinode-449000
	I1212 15:12:23.848365    3674 config.go:182] Loaded profile config "multinode-449000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1212 15:12:23.869607    3674 out.go:177] 
	W1212 15:12:23.891364    3674 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	W1212 15:12:23.891389    3674 out.go:239] * 
	* 
	W1212 15:12:23.895252    3674 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 15:12:23.916058    3674 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:284: I1212 15:12:23.847180    3674 out.go:296] Setting OutFile to fd 1 ...
I1212 15:12:23.847509    3674 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1212 15:12:23.847516    3674 out.go:309] Setting ErrFile to fd 2...
I1212 15:12:23.847520    3674 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1212 15:12:23.847713    3674 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17777-1259/.minikube/bin
I1212 15:12:23.848062    3674 mustload.go:65] Loading cluster: multinode-449000
I1212 15:12:23.848365    3674 config.go:182] Loaded profile config "multinode-449000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1212 15:12:23.869607    3674 out.go:177] 
W1212 15:12:23.891364    3674 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
W1212 15:12:23.891389    3674 out.go:239] * 
* 
W1212 15:12:23.895252    3674 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I1212 15:12:23.916058    3674 out.go:177] 
multinode_test.go:285: node start returned an error. args "out/minikube-darwin-amd64 -p multinode-449000 node start m03 --alsologtostderr": exit status 85
multinode_test.go:289: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-449000 status
multinode_test.go:289: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-449000 status: exit status 6 (143.346859ms)

                                                
                                                
-- stdout --
	multinode-449000
	type: Control Plane
	host: Running
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Misconfigured
	
	
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1212 15:12:24.073428    3676 status.go:415] kubeconfig endpoint: extract IP: "multinode-449000" does not appear in /Users/jenkins/minikube-integration/17777-1259/kubeconfig

                                                
                                                
** /stderr **
multinode_test.go:291: failed to run minikube status. args "out/minikube-darwin-amd64 -p multinode-449000 status" : exit status 6
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-449000 -n multinode-449000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-449000 -n multinode-449000: exit status 6 (142.864928ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1212 15:12:24.216579    3681 status.go:415] kubeconfig endpoint: extract IP: "multinode-449000" does not appear in /Users/jenkins/minikube-integration/17777-1259/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "multinode-449000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestMultiNode/serial/StartAfterStop (0.43s)

                                                
                                    
TestMultiNode/serial/DeleteNode (3.01s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:422: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-449000 node delete m03
multinode_test.go:422: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-449000 node delete m03: exit status 80 (244.435401ms)

                                                
                                                
-- stdout --
	* Deleting node m03 from cluster multinode-449000
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_DELETE: deleting node: retrieve: Could not find node m03
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube_node_494011a6b05fec7d81170870a2aee2ef446d16a4_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:424: node stop returned an error. args "out/minikube-darwin-amd64 -p multinode-449000 node delete m03": exit status 80
multinode_test.go:428: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-449000 status --alsologtostderr
multinode_test.go:434: status says both hosts are not running: args "out/minikube-darwin-amd64 -p multinode-449000 status --alsologtostderr": multinode-449000
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
multinode_test.go:438: status says both kubelets are not running: args "out/minikube-darwin-amd64 -p multinode-449000 status --alsologtostderr": multinode-449000
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
multinode_test.go:452: (dbg) Run:  kubectl get nodes
multinode_test.go:460: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
multinode_test.go:465: expected 2 nodes Ready status to be True, got 
-- stdout --
	' True
	'

                                                
                                                
-- /stdout --
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-449000 -n multinode-449000
helpers_test.go:244: <<< TestMultiNode/serial/DeleteNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/DeleteNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-449000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p multinode-449000 logs -n 25: (2.036351957s)
helpers_test.go:252: TestMultiNode/serial/DeleteNode logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| kubectl | -p multinode-449000 -- rollout       | multinode-449000 | jenkins | v1.32.0 | 12 Dec 23 15:10 PST |                     |
	|         | status deployment/busybox            |                  |         |         |                     |                     |
	| kubectl | -p multinode-449000 -- get pods -o   | multinode-449000 | jenkins | v1.32.0 | 12 Dec 23 15:10 PST |                     |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |         |         |                     |                     |
	| kubectl | -p multinode-449000 -- get pods -o   | multinode-449000 | jenkins | v1.32.0 | 12 Dec 23 15:10 PST |                     |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |         |         |                     |                     |
	| kubectl | -p multinode-449000 -- get pods -o   | multinode-449000 | jenkins | v1.32.0 | 12 Dec 23 15:10 PST |                     |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |         |         |                     |                     |
	| kubectl | -p multinode-449000 -- get pods -o   | multinode-449000 | jenkins | v1.32.0 | 12 Dec 23 15:10 PST |                     |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |         |         |                     |                     |
	| kubectl | -p multinode-449000 -- get pods -o   | multinode-449000 | jenkins | v1.32.0 | 12 Dec 23 15:10 PST |                     |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |         |         |                     |                     |
	| kubectl | -p multinode-449000 -- get pods -o   | multinode-449000 | jenkins | v1.32.0 | 12 Dec 23 15:10 PST |                     |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |         |         |                     |                     |
	| kubectl | -p multinode-449000 -- get pods -o   | multinode-449000 | jenkins | v1.32.0 | 12 Dec 23 15:10 PST |                     |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |         |         |                     |                     |
	| kubectl | -p multinode-449000 -- get pods -o   | multinode-449000 | jenkins | v1.32.0 | 12 Dec 23 15:11 PST |                     |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |         |         |                     |                     |
	| kubectl | -p multinode-449000 -- get pods -o   | multinode-449000 | jenkins | v1.32.0 | 12 Dec 23 15:11 PST |                     |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |         |         |                     |                     |
	| kubectl | -p multinode-449000 -- get pods -o   | multinode-449000 | jenkins | v1.32.0 | 12 Dec 23 15:11 PST |                     |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |         |         |                     |                     |
	| kubectl | -p multinode-449000 -- get pods -o   | multinode-449000 | jenkins | v1.32.0 | 12 Dec 23 15:12 PST |                     |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |         |         |                     |                     |
	| kubectl | -p multinode-449000 -- get pods -o   | multinode-449000 | jenkins | v1.32.0 | 12 Dec 23 15:12 PST |                     |
	|         | jsonpath='{.items[*].metadata.name}' |                  |         |         |                     |                     |
	| kubectl | -p multinode-449000 -- exec          | multinode-449000 | jenkins | v1.32.0 | 12 Dec 23 15:12 PST |                     |
	|         | -- nslookup kubernetes.io            |                  |         |         |                     |                     |
	| kubectl | -p multinode-449000 -- exec          | multinode-449000 | jenkins | v1.32.0 | 12 Dec 23 15:12 PST |                     |
	|         | -- nslookup kubernetes.default       |                  |         |         |                     |                     |
	| kubectl | -p multinode-449000                  | multinode-449000 | jenkins | v1.32.0 | 12 Dec 23 15:12 PST |                     |
	|         | -- exec  -- nslookup                 |                  |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |                  |         |         |                     |                     |
	| kubectl | -p multinode-449000 -- get pods -o   | multinode-449000 | jenkins | v1.32.0 | 12 Dec 23 15:12 PST |                     |
	|         | jsonpath='{.items[*].metadata.name}' |                  |         |         |                     |                     |
	| node    | add -p multinode-449000 -v 3         | multinode-449000 | jenkins | v1.32.0 | 12 Dec 23 15:12 PST |                     |
	|         | --alsologtostderr                    |                  |         |         |                     |                     |
	| node    | multinode-449000 node stop m03       | multinode-449000 | jenkins | v1.32.0 | 12 Dec 23 15:12 PST |                     |
	| node    | multinode-449000 node start          | multinode-449000 | jenkins | v1.32.0 | 12 Dec 23 15:12 PST |                     |
	|         | m03 --alsologtostderr                |                  |         |         |                     |                     |
	| node    | list -p multinode-449000             | multinode-449000 | jenkins | v1.32.0 | 12 Dec 23 15:12 PST |                     |
	| stop    | -p multinode-449000                  | multinode-449000 | jenkins | v1.32.0 | 12 Dec 23 15:12 PST | 12 Dec 23 15:12 PST |
	| start   | -p multinode-449000                  | multinode-449000 | jenkins | v1.32.0 | 12 Dec 23 15:12 PST | 12 Dec 23 15:13 PST |
	|         | --wait=true -v=8                     |                  |         |         |                     |                     |
	|         | --alsologtostderr                    |                  |         |         |                     |                     |
	| node    | list -p multinode-449000             | multinode-449000 | jenkins | v1.32.0 | 12 Dec 23 15:13 PST |                     |
	| node    | multinode-449000 node delete         | multinode-449000 | jenkins | v1.32.0 | 12 Dec 23 15:13 PST |                     |
	|         | m03                                  |                  |         |         |                     |                     |
	|---------|--------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/12 15:12:32
	Running on machine: MacOS-Agent-2
	Binary: Built with gc go1.21.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 15:12:32.578719    3693 out.go:296] Setting OutFile to fd 1 ...
	I1212 15:12:32.579011    3693 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 15:12:32.579016    3693 out.go:309] Setting ErrFile to fd 2...
	I1212 15:12:32.579020    3693 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 15:12:32.579209    3693 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17777-1259/.minikube/bin
	I1212 15:12:32.580602    3693 out.go:303] Setting JSON to false
	I1212 15:12:32.602702    3693 start.go:128] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":2523,"bootTime":1702420229,"procs":450,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.2","kernelVersion":"23.2.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W1212 15:12:32.602821    3693 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1212 15:12:32.624663    3693 out.go:177] * [multinode-449000] minikube v1.32.0 on Darwin 14.2
	I1212 15:12:32.666611    3693 out.go:177]   - MINIKUBE_LOCATION=17777
	I1212 15:12:32.666693    3693 notify.go:220] Checking for updates...
	I1212 15:12:32.709533    3693 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17777-1259/kubeconfig
	I1212 15:12:32.730574    3693 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1212 15:12:32.772467    3693 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 15:12:32.794378    3693 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17777-1259/.minikube
	I1212 15:12:32.815666    3693 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 15:12:32.837312    3693 config.go:182] Loaded profile config "multinode-449000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1212 15:12:32.837489    3693 driver.go:392] Setting default libvirt URI to qemu:///system
	I1212 15:12:32.838157    3693 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1212 15:12:32.838269    3693 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1212 15:12:32.847416    3693 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51270
	I1212 15:12:32.847835    3693 main.go:141] libmachine: () Calling .GetVersion
	I1212 15:12:32.848258    3693 main.go:141] libmachine: Using API Version  1
	I1212 15:12:32.848268    3693 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 15:12:32.848516    3693 main.go:141] libmachine: () Calling .GetMachineName
	I1212 15:12:32.848646    3693 main.go:141] libmachine: (multinode-449000) Calling .DriverName
	I1212 15:12:32.877419    3693 out.go:177] * Using the hyperkit driver based on existing profile
	I1212 15:12:32.919592    3693 start.go:298] selected driver: hyperkit
	I1212 15:12:32.919618    3693 start.go:902] validating driver "hyperkit" against &{Name:multinode-449000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17738/minikube-v1.32.1-1701996673-17738-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernete
sConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-449000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.169.0.13 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountM
Size:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 15:12:32.919784    3693 start.go:913] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 15:12:32.920017    3693 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 15:12:32.920210    3693 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/17777-1259/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I1212 15:12:32.929337    3693 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.32.0
	I1212 15:12:32.933108    3693 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1212 15:12:32.933130    3693 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I1212 15:12:32.935766    3693 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 15:12:32.935836    3693 cni.go:84] Creating CNI manager for ""
	I1212 15:12:32.935845    3693 cni.go:136] 1 nodes found, recommending kindnet
	I1212 15:12:32.935852    3693 start_flags.go:323] config:
	{Name:multinode-449000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17738/minikube-v1.32.1-1701996673-17738-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-449000 Namespace:default
APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.169.0.13 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:
DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 15:12:32.936016    3693 iso.go:125] acquiring lock: {Name:mk96a55b7848c6dd3321ed62339797ab51ac6b5a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 15:12:32.978477    3693 out.go:177] * Starting control plane node multinode-449000 in cluster multinode-449000
	I1212 15:12:32.999477    3693 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1212 15:12:32.999548    3693 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17777-1259/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I1212 15:12:32.999615    3693 cache.go:56] Caching tarball of preloaded images
	I1212 15:12:32.999824    3693 preload.go:174] Found /Users/jenkins/minikube-integration/17777-1259/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1212 15:12:32.999844    3693 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I1212 15:12:32.999983    3693 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/multinode-449000/config.json ...
	I1212 15:12:33.000928    3693 start.go:365] acquiring machines lock for multinode-449000: {Name:mk51496c390b032727acf9b9a5f67e389f19ec26 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1212 15:12:33.001029    3693 start.go:369] acquired machines lock for "multinode-449000" in 83.884µs
	I1212 15:12:33.001054    3693 start.go:96] Skipping create...Using existing machine configuration
	I1212 15:12:33.001064    3693 fix.go:54] fixHost starting: 
	I1212 15:12:33.001349    3693 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1212 15:12:33.001376    3693 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1212 15:12:33.009746    3693 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51272
	I1212 15:12:33.010128    3693 main.go:141] libmachine: () Calling .GetVersion
	I1212 15:12:33.010481    3693 main.go:141] libmachine: Using API Version  1
	I1212 15:12:33.010494    3693 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 15:12:33.010739    3693 main.go:141] libmachine: () Calling .GetMachineName
	I1212 15:12:33.010866    3693 main.go:141] libmachine: (multinode-449000) Calling .DriverName
	I1212 15:12:33.010987    3693 main.go:141] libmachine: (multinode-449000) Calling .GetState
	I1212 15:12:33.011072    3693 main.go:141] libmachine: (multinode-449000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1212 15:12:33.011145    3693 main.go:141] libmachine: (multinode-449000) DBG | hyperkit pid from json: 3531
	I1212 15:12:33.012099    3693 main.go:141] libmachine: (multinode-449000) DBG | hyperkit pid 3531 missing from process table
	I1212 15:12:33.012125    3693 fix.go:102] recreateIfNeeded on multinode-449000: state=Stopped err=<nil>
	I1212 15:12:33.012142    3693 main.go:141] libmachine: (multinode-449000) Calling .DriverName
	W1212 15:12:33.012229    3693 fix.go:128] unexpected machine state, will restart: <nil>
	I1212 15:12:33.033339    3693 out.go:177] * Restarting existing hyperkit VM for "multinode-449000" ...
	I1212 15:12:33.054562    3693 main.go:141] libmachine: (multinode-449000) Calling .Start
	I1212 15:12:33.054822    3693 main.go:141] libmachine: (multinode-449000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1212 15:12:33.054884    3693 main.go:141] libmachine: (multinode-449000) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/17777-1259/.minikube/machines/multinode-449000/hyperkit.pid
	I1212 15:12:33.056767    3693 main.go:141] libmachine: (multinode-449000) DBG | hyperkit pid 3531 missing from process table
	I1212 15:12:33.056786    3693 main.go:141] libmachine: (multinode-449000) DBG | pid 3531 is in state "Stopped"
	I1212 15:12:33.056806    3693 main.go:141] libmachine: (multinode-449000) DBG | Removing stale pid file /Users/jenkins/minikube-integration/17777-1259/.minikube/machines/multinode-449000/hyperkit.pid...
	I1212 15:12:33.057017    3693 main.go:141] libmachine: (multinode-449000) DBG | Using UUID 9fde523a-9943-11ee-8111-f01898ef957c
	I1212 15:12:33.175971    3693 main.go:141] libmachine: (multinode-449000) DBG | Generated MAC f2:78:2:3f:65:80
	I1212 15:12:33.176004    3693 main.go:141] libmachine: (multinode-449000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-449000
	I1212 15:12:33.176170    3693 main.go:141] libmachine: (multinode-449000) DBG | 2023/12/12 15:12:33 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/17777-1259/.minikube/machines/multinode-449000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"9fde523a-9943-11ee-8111-f01898ef957c", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc00043aa20)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/17777-1259/.minikube/machines/multinode-449000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/17777-1259/.minikube/machines/multinode-449000/bzimage", Initrd:"/Users/jenkins/minikube-integration/17777-1259/.minikube/machines/multinode-449000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Proce
ss)(nil)}
	I1212 15:12:33.176210    3693 main.go:141] libmachine: (multinode-449000) DBG | 2023/12/12 15:12:33 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/17777-1259/.minikube/machines/multinode-449000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"9fde523a-9943-11ee-8111-f01898ef957c", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc00043aa20)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/17777-1259/.minikube/machines/multinode-449000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/17777-1259/.minikube/machines/multinode-449000/bzimage", Initrd:"/Users/jenkins/minikube-integration/17777-1259/.minikube/machines/multinode-449000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Proce
ss)(nil)}
	I1212 15:12:33.176256    3693 main.go:141] libmachine: (multinode-449000) DBG | 2023/12/12 15:12:33 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/17777-1259/.minikube/machines/multinode-449000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "9fde523a-9943-11ee-8111-f01898ef957c", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/17777-1259/.minikube/machines/multinode-449000/multinode-449000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/17777-1259/.minikube/machines/multinode-449000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/17777-1259/.minikube/machines/multinode-449000/tty,log=/Users/jenkins/minikube-integration/17777-1259/.minikube/machines/multinode-449000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/17777-1259/.minikube/machines/multinode-449000/bzimage,/Users/jenkins/minikube-integration/1777
7-1259/.minikube/machines/multinode-449000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-449000"}
	I1212 15:12:33.176359    3693 main.go:141] libmachine: (multinode-449000) DBG | 2023/12/12 15:12:33 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/17777-1259/.minikube/machines/multinode-449000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 9fde523a-9943-11ee-8111-f01898ef957c -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/17777-1259/.minikube/machines/multinode-449000/multinode-449000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/17777-1259/.minikube/machines/multinode-449000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/17777-1259/.minikube/machines/multinode-449000/tty,log=/Users/jenkins/minikube-integration/17777-1259/.minikube/machines/multinode-449000/console-ring -f kexec,/Users/jenkins/minikube-integration/17777-1259/.minikube/machines/multinode-449000/bzimage,/Users/jenkins/minikube-integration/17777-1259/.minikube/machines/multinode-449000/initrd,earlyprintk=
serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-449000"
	I1212 15:12:33.176391    3693 main.go:141] libmachine: (multinode-449000) DBG | 2023/12/12 15:12:33 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I1212 15:12:33.177869    3693 main.go:141] libmachine: (multinode-449000) DBG | 2023/12/12 15:12:33 DEBUG: hyperkit: Pid is 3705
	I1212 15:12:33.178322    3693 main.go:141] libmachine: (multinode-449000) DBG | Attempt 0
	I1212 15:12:33.178358    3693 main.go:141] libmachine: (multinode-449000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1212 15:12:33.178452    3693 main.go:141] libmachine: (multinode-449000) DBG | hyperkit pid from json: 3705
	I1212 15:12:33.180037    3693 main.go:141] libmachine: (multinode-449000) DBG | Searching for f2:78:2:3f:65:80 in /var/db/dhcpd_leases ...
	I1212 15:12:33.180111    3693 main.go:141] libmachine: (multinode-449000) DBG | Found 12 entries in /var/db/dhcpd_leases!
	I1212 15:12:33.180126    3693 main.go:141] libmachine: (multinode-449000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:f2:78:2:3f:65:80 ID:1,f2:78:2:3f:65:80 Lease:0x657a39e7}
	I1212 15:12:33.180137    3693 main.go:141] libmachine: (multinode-449000) DBG | Found match: f2:78:2:3f:65:80
	I1212 15:12:33.180147    3693 main.go:141] libmachine: (multinode-449000) DBG | IP: 192.169.0.13
	I1212 15:12:33.180217    3693 main.go:141] libmachine: (multinode-449000) Calling .GetConfigRaw
	I1212 15:12:33.180847    3693 main.go:141] libmachine: (multinode-449000) Calling .GetIP
	I1212 15:12:33.181013    3693 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/multinode-449000/config.json ...
	I1212 15:12:33.181381    3693 machine.go:88] provisioning docker machine ...
	I1212 15:12:33.181392    3693 main.go:141] libmachine: (multinode-449000) Calling .DriverName
	I1212 15:12:33.181496    3693 main.go:141] libmachine: (multinode-449000) Calling .GetMachineName
	I1212 15:12:33.181612    3693 buildroot.go:166] provisioning hostname "multinode-449000"
	I1212 15:12:33.181623    3693 main.go:141] libmachine: (multinode-449000) Calling .GetMachineName
	I1212 15:12:33.181721    3693 main.go:141] libmachine: (multinode-449000) Calling .GetSSHHostname
	I1212 15:12:33.181815    3693 main.go:141] libmachine: (multinode-449000) Calling .GetSSHPort
	I1212 15:12:33.181925    3693 main.go:141] libmachine: (multinode-449000) Calling .GetSSHKeyPath
	I1212 15:12:33.182041    3693 main.go:141] libmachine: (multinode-449000) Calling .GetSSHKeyPath
	I1212 15:12:33.182130    3693 main.go:141] libmachine: (multinode-449000) Calling .GetSSHUsername
	I1212 15:12:33.182285    3693 main.go:141] libmachine: Using SSH client type: native
	I1212 15:12:33.182581    3693 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1406180] 0x1408e60 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I1212 15:12:33.182594    3693 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-449000 && echo "multinode-449000" | sudo tee /etc/hostname
	I1212 15:12:33.185597    3693 main.go:141] libmachine: (multinode-449000) DBG | 2023/12/12 15:12:33 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I1212 15:12:33.244392    3693 main.go:141] libmachine: (multinode-449000) DBG | 2023/12/12 15:12:33 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/17777-1259/.minikube/machines/multinode-449000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I1212 15:12:33.245038    3693 main.go:141] libmachine: (multinode-449000) DBG | 2023/12/12 15:12:33 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1212 15:12:33.245058    3693 main.go:141] libmachine: (multinode-449000) DBG | 2023/12/12 15:12:33 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1212 15:12:33.245068    3693 main.go:141] libmachine: (multinode-449000) DBG | 2023/12/12 15:12:33 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1212 15:12:33.245085    3693 main.go:141] libmachine: (multinode-449000) DBG | 2023/12/12 15:12:33 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1212 15:12:33.609612    3693 main.go:141] libmachine: (multinode-449000) DBG | 2023/12/12 15:12:33 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I1212 15:12:33.609626    3693 main.go:141] libmachine: (multinode-449000) DBG | 2023/12/12 15:12:33 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I1212 15:12:33.713704    3693 main.go:141] libmachine: (multinode-449000) DBG | 2023/12/12 15:12:33 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1212 15:12:33.713733    3693 main.go:141] libmachine: (multinode-449000) DBG | 2023/12/12 15:12:33 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1212 15:12:33.713748    3693 main.go:141] libmachine: (multinode-449000) DBG | 2023/12/12 15:12:33 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1212 15:12:33.713760    3693 main.go:141] libmachine: (multinode-449000) DBG | 2023/12/12 15:12:33 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1212 15:12:33.714583    3693 main.go:141] libmachine: (multinode-449000) DBG | 2023/12/12 15:12:33 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I1212 15:12:33.714594    3693 main.go:141] libmachine: (multinode-449000) DBG | 2023/12/12 15:12:33 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I1212 15:12:38.623164    3693 main.go:141] libmachine: (multinode-449000) DBG | 2023/12/12 15:12:38 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I1212 15:12:38.623272    3693 main.go:141] libmachine: (multinode-449000) DBG | 2023/12/12 15:12:38 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I1212 15:12:38.623285    3693 main.go:141] libmachine: (multinode-449000) DBG | 2023/12/12 15:12:38 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I1212 15:12:44.264613    3693 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-449000
	
	I1212 15:12:44.264632    3693 main.go:141] libmachine: (multinode-449000) Calling .GetSSHHostname
	I1212 15:12:44.264769    3693 main.go:141] libmachine: (multinode-449000) Calling .GetSSHPort
	I1212 15:12:44.264870    3693 main.go:141] libmachine: (multinode-449000) Calling .GetSSHKeyPath
	I1212 15:12:44.264955    3693 main.go:141] libmachine: (multinode-449000) Calling .GetSSHKeyPath
	I1212 15:12:44.265048    3693 main.go:141] libmachine: (multinode-449000) Calling .GetSSHUsername
	I1212 15:12:44.265180    3693 main.go:141] libmachine: Using SSH client type: native
	I1212 15:12:44.265436    3693 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1406180] 0x1408e60 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I1212 15:12:44.265448    3693 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-449000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-449000/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-449000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 15:12:44.334859    3693 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1212 15:12:44.334876    3693 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/17777-1259/.minikube CaCertPath:/Users/jenkins/minikube-integration/17777-1259/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/17777-1259/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/17777-1259/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/17777-1259/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/17777-1259/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/17777-1259/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/17777-1259/.minikube}
	I1212 15:12:44.334894    3693 buildroot.go:174] setting up certificates
	I1212 15:12:44.334904    3693 provision.go:83] configureAuth start
	I1212 15:12:44.334912    3693 main.go:141] libmachine: (multinode-449000) Calling .GetMachineName
	I1212 15:12:44.335045    3693 main.go:141] libmachine: (multinode-449000) Calling .GetIP
	I1212 15:12:44.335145    3693 main.go:141] libmachine: (multinode-449000) Calling .GetSSHHostname
	I1212 15:12:44.335240    3693 provision.go:138] copyHostCerts
	I1212 15:12:44.335271    3693 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17777-1259/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/17777-1259/.minikube/ca.pem
	I1212 15:12:44.335332    3693 exec_runner.go:144] found /Users/jenkins/minikube-integration/17777-1259/.minikube/ca.pem, removing ...
	I1212 15:12:44.335341    3693 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17777-1259/.minikube/ca.pem
	I1212 15:12:44.335522    3693 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17777-1259/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/17777-1259/.minikube/ca.pem (1082 bytes)
	I1212 15:12:44.335744    3693 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17777-1259/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/17777-1259/.minikube/cert.pem
	I1212 15:12:44.335783    3693 exec_runner.go:144] found /Users/jenkins/minikube-integration/17777-1259/.minikube/cert.pem, removing ...
	I1212 15:12:44.335788    3693 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17777-1259/.minikube/cert.pem
	I1212 15:12:44.335882    3693 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17777-1259/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/17777-1259/.minikube/cert.pem (1123 bytes)
	I1212 15:12:44.336032    3693 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17777-1259/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/17777-1259/.minikube/key.pem
	I1212 15:12:44.336068    3693 exec_runner.go:144] found /Users/jenkins/minikube-integration/17777-1259/.minikube/key.pem, removing ...
	I1212 15:12:44.336073    3693 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17777-1259/.minikube/key.pem
	I1212 15:12:44.336186    3693 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17777-1259/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/17777-1259/.minikube/key.pem (1675 bytes)
	I1212 15:12:44.336357    3693 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/17777-1259/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/17777-1259/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/17777-1259/.minikube/certs/ca-key.pem org=jenkins.multinode-449000 san=[192.169.0.13 192.169.0.13 localhost 127.0.0.1 minikube multinode-449000]
	I1212 15:12:44.383535    3693 provision.go:172] copyRemoteCerts
	I1212 15:12:44.383587    3693 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 15:12:44.383605    3693 main.go:141] libmachine: (multinode-449000) Calling .GetSSHHostname
	I1212 15:12:44.383717    3693 main.go:141] libmachine: (multinode-449000) Calling .GetSSHPort
	I1212 15:12:44.383824    3693 main.go:141] libmachine: (multinode-449000) Calling .GetSSHKeyPath
	I1212 15:12:44.383928    3693 main.go:141] libmachine: (multinode-449000) Calling .GetSSHUsername
	I1212 15:12:44.384020    3693 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17777-1259/.minikube/machines/multinode-449000/id_rsa Username:docker}
	I1212 15:12:44.421853    3693 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17777-1259/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1212 15:12:44.421921    3693 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17777-1259/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1212 15:12:44.437517    3693 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17777-1259/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1212 15:12:44.437575    3693 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17777-1259/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1212 15:12:44.453177    3693 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17777-1259/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1212 15:12:44.453227    3693 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17777-1259/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1212 15:12:44.468767    3693 provision.go:86] duration metric: configureAuth took 133.852329ms
	I1212 15:12:44.468778    3693 buildroot.go:189] setting minikube options for container-runtime
	I1212 15:12:44.468896    3693 config.go:182] Loaded profile config "multinode-449000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1212 15:12:44.468915    3693 main.go:141] libmachine: (multinode-449000) Calling .DriverName
	I1212 15:12:44.469047    3693 main.go:141] libmachine: (multinode-449000) Calling .GetSSHHostname
	I1212 15:12:44.469134    3693 main.go:141] libmachine: (multinode-449000) Calling .GetSSHPort
	I1212 15:12:44.469221    3693 main.go:141] libmachine: (multinode-449000) Calling .GetSSHKeyPath
	I1212 15:12:44.469302    3693 main.go:141] libmachine: (multinode-449000) Calling .GetSSHKeyPath
	I1212 15:12:44.469379    3693 main.go:141] libmachine: (multinode-449000) Calling .GetSSHUsername
	I1212 15:12:44.469488    3693 main.go:141] libmachine: Using SSH client type: native
	I1212 15:12:44.469718    3693 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1406180] 0x1408e60 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I1212 15:12:44.469726    3693 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1212 15:12:44.535938    3693 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1212 15:12:44.535949    3693 buildroot.go:70] root file system type: tmpfs
	I1212 15:12:44.536025    3693 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1212 15:12:44.536039    3693 main.go:141] libmachine: (multinode-449000) Calling .GetSSHHostname
	I1212 15:12:44.536161    3693 main.go:141] libmachine: (multinode-449000) Calling .GetSSHPort
	I1212 15:12:44.536268    3693 main.go:141] libmachine: (multinode-449000) Calling .GetSSHKeyPath
	I1212 15:12:44.536348    3693 main.go:141] libmachine: (multinode-449000) Calling .GetSSHKeyPath
	I1212 15:12:44.536431    3693 main.go:141] libmachine: (multinode-449000) Calling .GetSSHUsername
	I1212 15:12:44.536562    3693 main.go:141] libmachine: Using SSH client type: native
	I1212 15:12:44.536813    3693 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1406180] 0x1408e60 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I1212 15:12:44.536858    3693 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1212 15:12:44.610147    3693 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1212 15:12:44.610168    3693 main.go:141] libmachine: (multinode-449000) Calling .GetSSHHostname
	I1212 15:12:44.610319    3693 main.go:141] libmachine: (multinode-449000) Calling .GetSSHPort
	I1212 15:12:44.610433    3693 main.go:141] libmachine: (multinode-449000) Calling .GetSSHKeyPath
	I1212 15:12:44.610543    3693 main.go:141] libmachine: (multinode-449000) Calling .GetSSHKeyPath
	I1212 15:12:44.610629    3693 main.go:141] libmachine: (multinode-449000) Calling .GetSSHUsername
	I1212 15:12:44.610752    3693 main.go:141] libmachine: Using SSH client type: native
	I1212 15:12:44.610986    3693 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1406180] 0x1408e60 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I1212 15:12:44.610998    3693 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1212 15:12:45.124329    3693 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1212 15:12:45.124344    3693 machine.go:91] provisioned docker machine in 11.943037065s
	I1212 15:12:45.124354    3693 start.go:300] post-start starting for "multinode-449000" (driver="hyperkit")
	I1212 15:12:45.124370    3693 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 15:12:45.124384    3693 main.go:141] libmachine: (multinode-449000) Calling .DriverName
	I1212 15:12:45.124592    3693 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 15:12:45.124606    3693 main.go:141] libmachine: (multinode-449000) Calling .GetSSHHostname
	I1212 15:12:45.124705    3693 main.go:141] libmachine: (multinode-449000) Calling .GetSSHPort
	I1212 15:12:45.124810    3693 main.go:141] libmachine: (multinode-449000) Calling .GetSSHKeyPath
	I1212 15:12:45.124921    3693 main.go:141] libmachine: (multinode-449000) Calling .GetSSHUsername
	I1212 15:12:45.125016    3693 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17777-1259/.minikube/machines/multinode-449000/id_rsa Username:docker}
	I1212 15:12:45.164380    3693 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 15:12:45.166759    3693 command_runner.go:130] > NAME=Buildroot
	I1212 15:12:45.166769    3693 command_runner.go:130] > VERSION=2021.02.12-1-g0ec83c8-dirty
	I1212 15:12:45.166775    3693 command_runner.go:130] > ID=buildroot
	I1212 15:12:45.166779    3693 command_runner.go:130] > VERSION_ID=2021.02.12
	I1212 15:12:45.166786    3693 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I1212 15:12:45.166958    3693 info.go:137] Remote host: Buildroot 2021.02.12
	I1212 15:12:45.166971    3693 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17777-1259/.minikube/addons for local assets ...
	I1212 15:12:45.167068    3693 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17777-1259/.minikube/files for local assets ...
	I1212 15:12:45.167248    3693 filesync.go:149] local asset: /Users/jenkins/minikube-integration/17777-1259/.minikube/files/etc/ssl/certs/17202.pem -> 17202.pem in /etc/ssl/certs
	I1212 15:12:45.167255    3693 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17777-1259/.minikube/files/etc/ssl/certs/17202.pem -> /etc/ssl/certs/17202.pem
	I1212 15:12:45.167460    3693 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 15:12:45.173790    3693 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17777-1259/.minikube/files/etc/ssl/certs/17202.pem --> /etc/ssl/certs/17202.pem (1708 bytes)
	I1212 15:12:45.189402    3693 start.go:303] post-start completed in 65.033975ms
	I1212 15:12:45.189411    3693 fix.go:56] fixHost completed within 12.188433725s
	I1212 15:12:45.189427    3693 main.go:141] libmachine: (multinode-449000) Calling .GetSSHHostname
	I1212 15:12:45.189559    3693 main.go:141] libmachine: (multinode-449000) Calling .GetSSHPort
	I1212 15:12:45.189651    3693 main.go:141] libmachine: (multinode-449000) Calling .GetSSHKeyPath
	I1212 15:12:45.189731    3693 main.go:141] libmachine: (multinode-449000) Calling .GetSSHKeyPath
	I1212 15:12:45.189807    3693 main.go:141] libmachine: (multinode-449000) Calling .GetSSHUsername
	I1212 15:12:45.189918    3693 main.go:141] libmachine: Using SSH client type: native
	I1212 15:12:45.190163    3693 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1406180] 0x1408e60 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I1212 15:12:45.190171    3693 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1212 15:12:45.255463    3693 main.go:141] libmachine: SSH cmd err, output: <nil>: 1702422765.328630850
	
	I1212 15:12:45.255476    3693 fix.go:206] guest clock: 1702422765.328630850
	I1212 15:12:45.255482    3693 fix.go:219] Guest: 2023-12-12 15:12:45.32863085 -0800 PST Remote: 2023-12-12 15:12:45.189414 -0800 PST m=+12.655245333 (delta=139.21685ms)
	I1212 15:12:45.255501    3693 fix.go:190] guest clock delta is within tolerance: 139.21685ms
	I1212 15:12:45.255504    3693 start.go:83] releasing machines lock for "multinode-449000", held for 12.254551049s
	I1212 15:12:45.255522    3693 main.go:141] libmachine: (multinode-449000) Calling .DriverName
	I1212 15:12:45.255645    3693 main.go:141] libmachine: (multinode-449000) Calling .GetIP
	I1212 15:12:45.255744    3693 main.go:141] libmachine: (multinode-449000) Calling .DriverName
	I1212 15:12:45.256049    3693 main.go:141] libmachine: (multinode-449000) Calling .DriverName
	I1212 15:12:45.256156    3693 main.go:141] libmachine: (multinode-449000) Calling .DriverName
	I1212 15:12:45.256232    3693 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 15:12:45.256263    3693 main.go:141] libmachine: (multinode-449000) Calling .GetSSHHostname
	I1212 15:12:45.256281    3693 ssh_runner.go:195] Run: cat /version.json
	I1212 15:12:45.256292    3693 main.go:141] libmachine: (multinode-449000) Calling .GetSSHHostname
	I1212 15:12:45.256351    3693 main.go:141] libmachine: (multinode-449000) Calling .GetSSHPort
	I1212 15:12:45.256394    3693 main.go:141] libmachine: (multinode-449000) Calling .GetSSHPort
	I1212 15:12:45.256435    3693 main.go:141] libmachine: (multinode-449000) Calling .GetSSHKeyPath
	I1212 15:12:45.256470    3693 main.go:141] libmachine: (multinode-449000) Calling .GetSSHKeyPath
	I1212 15:12:45.256510    3693 main.go:141] libmachine: (multinode-449000) Calling .GetSSHUsername
	I1212 15:12:45.256556    3693 main.go:141] libmachine: (multinode-449000) Calling .GetSSHUsername
	I1212 15:12:45.256580    3693 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17777-1259/.minikube/machines/multinode-449000/id_rsa Username:docker}
	I1212 15:12:45.256630    3693 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17777-1259/.minikube/machines/multinode-449000/id_rsa Username:docker}
	I1212 15:12:45.291089    3693 command_runner.go:130] > {"iso_version": "v1.32.1-1701996673-17738", "kicbase_version": "v0.0.42-1701974066-17719", "minikube_version": "v1.32.0", "commit": "2518fadffa02a308edcd7fa670f350a21819c5e4"}
	I1212 15:12:45.291260    3693 ssh_runner.go:195] Run: systemctl --version
	I1212 15:12:45.294850    3693 command_runner.go:130] > systemd 247 (247)
	I1212 15:12:45.294880    3693 command_runner.go:130] > -PAM -AUDIT -SELINUX -IMA -APPARMOR -SMACK -SYSVINIT -UTMP -LIBCRYPTSETUP -GCRYPT -GNUTLS +ACL +XZ +LZ4 -ZSTD +SECCOMP +BLKID -ELFUTILS +KMOD -IDN2 -IDN -PCRE2 default-hierarchy=hybrid
	I1212 15:12:45.295134    3693 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1212 15:12:45.352382    3693 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1212 15:12:45.353428    3693 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1212 15:12:45.353460    3693 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 15:12:45.353540    3693 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 15:12:45.365218    3693 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I1212 15:12:45.365279    3693 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1212 15:12:45.365290    3693 start.go:475] detecting cgroup driver to use...
	I1212 15:12:45.365393    3693 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 15:12:45.376607    3693 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I1212 15:12:45.376926    3693 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1212 15:12:45.384072    3693 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1212 15:12:45.391101    3693 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1212 15:12:45.391160    3693 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1212 15:12:45.398281    3693 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 15:12:45.405399    3693 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1212 15:12:45.412569    3693 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 15:12:45.419600    3693 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 15:12:45.426803    3693 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1212 15:12:45.433792    3693 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 15:12:45.440018    3693 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1212 15:12:45.440208    3693 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 15:12:45.446472    3693 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 15:12:45.527753    3693 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1212 15:12:45.538920    3693 start.go:475] detecting cgroup driver to use...
	I1212 15:12:45.538993    3693 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1212 15:12:45.548686    3693 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I1212 15:12:45.549278    3693 command_runner.go:130] > [Unit]
	I1212 15:12:45.549287    3693 command_runner.go:130] > Description=Docker Application Container Engine
	I1212 15:12:45.549292    3693 command_runner.go:130] > Documentation=https://docs.docker.com
	I1212 15:12:45.549296    3693 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I1212 15:12:45.549301    3693 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I1212 15:12:45.549306    3693 command_runner.go:130] > StartLimitBurst=3
	I1212 15:12:45.549310    3693 command_runner.go:130] > StartLimitIntervalSec=60
	I1212 15:12:45.549313    3693 command_runner.go:130] > [Service]
	I1212 15:12:45.549316    3693 command_runner.go:130] > Type=notify
	I1212 15:12:45.549325    3693 command_runner.go:130] > Restart=on-failure
	I1212 15:12:45.549332    3693 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I1212 15:12:45.549340    3693 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I1212 15:12:45.549347    3693 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I1212 15:12:45.549353    3693 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I1212 15:12:45.549358    3693 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I1212 15:12:45.549365    3693 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I1212 15:12:45.549370    3693 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I1212 15:12:45.549378    3693 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I1212 15:12:45.549385    3693 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I1212 15:12:45.549388    3693 command_runner.go:130] > ExecStart=
	I1212 15:12:45.549402    3693 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	I1212 15:12:45.549406    3693 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I1212 15:12:45.549414    3693 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I1212 15:12:45.549419    3693 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I1212 15:12:45.549424    3693 command_runner.go:130] > LimitNOFILE=infinity
	I1212 15:12:45.549428    3693 command_runner.go:130] > LimitNPROC=infinity
	I1212 15:12:45.549433    3693 command_runner.go:130] > LimitCORE=infinity
	I1212 15:12:45.549451    3693 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I1212 15:12:45.549457    3693 command_runner.go:130] > # Only systemd 226 and above support this version.
	I1212 15:12:45.549461    3693 command_runner.go:130] > TasksMax=infinity
	I1212 15:12:45.549465    3693 command_runner.go:130] > TimeoutStartSec=0
	I1212 15:12:45.549470    3693 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I1212 15:12:45.549474    3693 command_runner.go:130] > Delegate=yes
	I1212 15:12:45.549479    3693 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I1212 15:12:45.549484    3693 command_runner.go:130] > KillMode=process
	I1212 15:12:45.549494    3693 command_runner.go:130] > [Install]
	I1212 15:12:45.549504    3693 command_runner.go:130] > WantedBy=multi-user.target
	I1212 15:12:45.549645    3693 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 15:12:45.559463    3693 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 15:12:45.573998    3693 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 15:12:45.583080    3693 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1212 15:12:45.591527    3693 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1212 15:12:45.672298    3693 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1212 15:12:45.681937    3693 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 15:12:45.693316    3693 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I1212 15:12:45.693676    3693 ssh_runner.go:195] Run: which cri-dockerd
	I1212 15:12:45.695853    3693 command_runner.go:130] > /usr/bin/cri-dockerd
	I1212 15:12:45.696128    3693 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1212 15:12:45.701798    3693 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1212 15:12:45.712521    3693 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1212 15:12:45.795172    3693 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1212 15:12:45.891573    3693 docker.go:560] configuring docker to use "cgroupfs" as cgroup driver...
	I1212 15:12:45.891667    3693 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1212 15:12:45.902881    3693 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 15:12:45.986543    3693 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1212 15:12:47.248011    3693 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.261457123s)
	I1212 15:12:47.248079    3693 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1212 15:12:47.344481    3693 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1212 15:12:47.433519    3693 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1212 15:12:47.522113    3693 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 15:12:47.608469    3693 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1212 15:12:47.622429    3693 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 15:12:47.726460    3693 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I1212 15:12:47.779760    3693 start.go:522] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1212 15:12:47.779844    3693 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1212 15:12:47.783254    3693 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I1212 15:12:47.783267    3693 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1212 15:12:47.783272    3693 command_runner.go:130] > Device: 16h/22d	Inode: 860         Links: 1
	I1212 15:12:47.783278    3693 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I1212 15:12:47.783284    3693 command_runner.go:130] > Access: 2023-12-12 23:12:47.811734961 +0000
	I1212 15:12:47.783288    3693 command_runner.go:130] > Modify: 2023-12-12 23:12:47.811734961 +0000
	I1212 15:12:47.783300    3693 command_runner.go:130] > Change: 2023-12-12 23:12:47.813734961 +0000
	I1212 15:12:47.783304    3693 command_runner.go:130] >  Birth: -
	I1212 15:12:47.783406    3693 start.go:543] Will wait 60s for crictl version
	I1212 15:12:47.783459    3693 ssh_runner.go:195] Run: which crictl
	I1212 15:12:47.785633    3693 command_runner.go:130] > /usr/bin/crictl
	I1212 15:12:47.785766    3693 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1212 15:12:47.819139    3693 command_runner.go:130] > Version:  0.1.0
	I1212 15:12:47.819152    3693 command_runner.go:130] > RuntimeName:  docker
	I1212 15:12:47.819156    3693 command_runner.go:130] > RuntimeVersion:  24.0.7
	I1212 15:12:47.819169    3693 command_runner.go:130] > RuntimeApiVersion:  v1
	I1212 15:12:47.820229    3693 start.go:559] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.7
	RuntimeApiVersion:  v1
	I1212 15:12:47.820295    3693 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1212 15:12:47.836152    3693 command_runner.go:130] > 24.0.7
	I1212 15:12:47.837018    3693 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1212 15:12:47.852883    3693 command_runner.go:130] > 24.0.7
	I1212 15:12:47.895657    3693 out.go:204] * Preparing Kubernetes v1.28.4 on Docker 24.0.7 ...
	I1212 15:12:47.895703    3693 main.go:141] libmachine: (multinode-449000) Calling .GetIP
	I1212 15:12:47.896083    3693 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I1212 15:12:47.900228    3693 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 15:12:47.908648    3693 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1212 15:12:47.908729    3693 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1212 15:12:47.920669    3693 docker.go:671] Got preloaded images: 
	I1212 15:12:47.920681    3693 docker.go:677] registry.k8s.io/kube-apiserver:v1.28.4 wasn't preloaded
	I1212 15:12:47.920729    3693 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1212 15:12:47.926404    3693 command_runner.go:139] > {"Repositories":{}}
	I1212 15:12:47.926627    3693 ssh_runner.go:195] Run: which lz4
	I1212 15:12:47.928834    3693 command_runner.go:130] > /usr/bin/lz4
	I1212 15:12:47.928986    3693 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17777-1259/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1212 15:12:47.929097    3693 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1212 15:12:47.931584    3693 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1212 15:12:47.931598    3693 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1212 15:12:47.931613    3693 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17777-1259/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (423165415 bytes)
	I1212 15:12:49.355994    3693 docker.go:635] Took 1.426949 seconds to copy over tarball
	I1212 15:12:49.356054    3693 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1212 15:12:52.826392    3693 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.470347091s)
	I1212 15:12:52.826406    3693 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1212 15:12:52.852711    3693 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1212 15:12:52.858590    3693 command_runner.go:139] > {"Repositories":{"gcr.io/k8s-minikube/storage-provisioner":{"gcr.io/k8s-minikube/storage-provisioner:v5":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"},"registry.k8s.io/coredns/coredns":{"registry.k8s.io/coredns/coredns:v1.10.1":"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e":"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc"},"registry.k8s.io/etcd":{"registry.k8s.io/etcd:3.5.9-0":"sha256:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9","registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3":"sha256:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9"},"registry.k8s.io/kube-apiserver":{"registry.k8s.io/kube-apiserver:v1.28.4":"sha256:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257","registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb":"sha256:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257"},"registry.k8s.io/kube-controller-manager":{"registry.k8s.io/kube-controller-manager:v1.28.4":"sha256:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591","registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c":"sha256:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591"},"registry.k8s.io/kube-proxy":{"registry.k8s.io/kube-proxy:v1.28.4":"sha256:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e","registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532":"sha256:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e"},"registry.k8s.io/kube-scheduler":{"registry.k8s.io/kube-scheduler:v1.28.4":"sha256:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1","registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba":"sha256:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1"},"registry.k8s.io/pause":{"registry.k8s.io/pause:3.9":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c"}}}
	I1212 15:12:52.858732    3693 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2629 bytes)
	I1212 15:12:52.870354    3693 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 15:12:52.952909    3693 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1212 15:12:54.290052    3693 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.337131751s)
	I1212 15:12:54.290146    3693 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1212 15:12:54.302833    3693 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.28.4
	I1212 15:12:54.302850    3693 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.28.4
	I1212 15:12:54.302854    3693 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.28.4
	I1212 15:12:54.302859    3693 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.28.4
	I1212 15:12:54.302863    3693 command_runner.go:130] > registry.k8s.io/etcd:3.5.9-0
	I1212 15:12:54.302867    3693 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.10.1
	I1212 15:12:54.302871    3693 command_runner.go:130] > registry.k8s.io/pause:3.9
	I1212 15:12:54.302877    3693 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 15:12:54.303341    3693 docker.go:671] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1212 15:12:54.303359    3693 cache_images.go:84] Images are preloaded, skipping loading
	I1212 15:12:54.303448    3693 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1212 15:12:54.330500    3693 command_runner.go:130] > cgroupfs
	I1212 15:12:54.331002    3693 cni.go:84] Creating CNI manager for ""
	I1212 15:12:54.331012    3693 cni.go:136] 1 nodes found, recommending kindnet
	I1212 15:12:54.331032    3693 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1212 15:12:54.331052    3693 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.169.0.13 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-449000 NodeName:multinode-449000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.169.0.13"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.169.0.13 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 15:12:54.331152    3693 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.169.0.13
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-449000"
	  kubeletExtraArgs:
	    node-ip: 192.169.0.13
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.169.0.13"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1212 15:12:54.331210    3693 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=multinode-449000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.13
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-449000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1212 15:12:54.331265    3693 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1212 15:12:54.337093    3693 command_runner.go:130] > kubeadm
	I1212 15:12:54.337101    3693 command_runner.go:130] > kubectl
	I1212 15:12:54.337104    3693 command_runner.go:130] > kubelet
	I1212 15:12:54.337217    3693 binaries.go:44] Found k8s binaries, skipping transfer
	I1212 15:12:54.337268    3693 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 15:12:54.342826    3693 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (377 bytes)
	I1212 15:12:54.353835    3693 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1212 15:12:54.365157    3693 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2099 bytes)
	I1212 15:12:54.376284    3693 ssh_runner.go:195] Run: grep 192.169.0.13	control-plane.minikube.internal$ /etc/hosts
	I1212 15:12:54.378646    3693 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.13	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 15:12:54.387242    3693 certs.go:56] Setting up /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/multinode-449000 for IP: 192.169.0.13
	I1212 15:12:54.387261    3693 certs.go:190] acquiring lock for shared ca certs: {Name:mkc116deb15cbfbe8939fd5907655f41e3f69c78 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 15:12:54.387432    3693 certs.go:199] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/17777-1259/.minikube/ca.key
	I1212 15:12:54.387501    3693 certs.go:199] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/17777-1259/.minikube/proxy-client-ca.key
	I1212 15:12:54.387553    3693 certs.go:319] generating minikube-user signed cert: /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/multinode-449000/client.key
	I1212 15:12:54.387566    3693 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/multinode-449000/client.crt with IP's: []
	I1212 15:12:54.619525    3693 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/multinode-449000/client.crt ...
	I1212 15:12:54.619537    3693 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/multinode-449000/client.crt: {Name:mk0807daa3515ee718ba11aabf57d3dac3262365 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 15:12:54.619863    3693 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/multinode-449000/client.key ...
	I1212 15:12:54.619871    3693 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/multinode-449000/client.key: {Name:mk357cbdd051f0d3acbbf8a7bace10c5b7261d10 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 15:12:54.620104    3693 certs.go:319] generating minikube signed cert: /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/multinode-449000/apiserver.key.ff8d457b
	I1212 15:12:54.620121    3693 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/multinode-449000/apiserver.crt.ff8d457b with IP's: [192.169.0.13 10.96.0.1 127.0.0.1 10.0.0.1]
	I1212 15:12:54.723963    3693 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/multinode-449000/apiserver.crt.ff8d457b ...
	I1212 15:12:54.723972    3693 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/multinode-449000/apiserver.crt.ff8d457b: {Name:mke0675663d98a37b625106744ffaab309b6de79 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 15:12:54.724223    3693 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/multinode-449000/apiserver.key.ff8d457b ...
	I1212 15:12:54.724231    3693 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/multinode-449000/apiserver.key.ff8d457b: {Name:mk27ff0a044b877fa07f0b6c04ba0bd26d1833f1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 15:12:54.724428    3693 certs.go:337] copying /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/multinode-449000/apiserver.crt.ff8d457b -> /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/multinode-449000/apiserver.crt
	I1212 15:12:54.724596    3693 certs.go:341] copying /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/multinode-449000/apiserver.key.ff8d457b -> /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/multinode-449000/apiserver.key
	I1212 15:12:54.724778    3693 certs.go:319] generating aggregator signed cert: /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/multinode-449000/proxy-client.key
	I1212 15:12:54.724792    3693 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/multinode-449000/proxy-client.crt with IP's: []
	I1212 15:12:54.789576    3693 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/multinode-449000/proxy-client.crt ...
	I1212 15:12:54.789586    3693 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/multinode-449000/proxy-client.crt: {Name:mk789e67ab3e58bc2ee893502a6792bceca82114 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 15:12:54.789836    3693 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/multinode-449000/proxy-client.key ...
	I1212 15:12:54.789845    3693 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/multinode-449000/proxy-client.key: {Name:mk391e479f8a009bbcf545605ae3d5ef24f8f15e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
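The client, apiserver, and proxy-client material above is generated in-process by minikube's crypto.go. Expressed with openssl, the client-certificate step amounts to roughly the following sketch (file names and the -days value are assumptions; the subject mirrors minikube's usual CN=minikube-user / O=system:masters):

    # generate a key, a CSR, and a CA-signed client certificate (sketch only)
    openssl genrsa -out client.key 2048
    openssl req -new -key client.key -subj "/CN=minikube-user/O=system:masters" -out client.csr
    openssl x509 -req -in client.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out client.crt -days 365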
	I1212 15:12:54.790052    3693 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/multinode-449000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1212 15:12:54.790085    3693 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/multinode-449000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1212 15:12:54.790106    3693 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/multinode-449000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1212 15:12:54.790132    3693 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/multinode-449000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1212 15:12:54.790151    3693 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17777-1259/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1212 15:12:54.790170    3693 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17777-1259/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1212 15:12:54.790188    3693 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17777-1259/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1212 15:12:54.790207    3693 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17777-1259/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1212 15:12:54.790308    3693 certs.go:437] found cert: /Users/jenkins/minikube-integration/17777-1259/.minikube/certs/Users/jenkins/minikube-integration/17777-1259/.minikube/certs/1720.pem (1338 bytes)
	W1212 15:12:54.790359    3693 certs.go:433] ignoring /Users/jenkins/minikube-integration/17777-1259/.minikube/certs/Users/jenkins/minikube-integration/17777-1259/.minikube/certs/1720_empty.pem, impossibly tiny 0 bytes
	I1212 15:12:54.790370    3693 certs.go:437] found cert: /Users/jenkins/minikube-integration/17777-1259/.minikube/certs/Users/jenkins/minikube-integration/17777-1259/.minikube/certs/ca-key.pem (1675 bytes)
	I1212 15:12:54.790417    3693 certs.go:437] found cert: /Users/jenkins/minikube-integration/17777-1259/.minikube/certs/Users/jenkins/minikube-integration/17777-1259/.minikube/certs/ca.pem (1082 bytes)
	I1212 15:12:54.790460    3693 certs.go:437] found cert: /Users/jenkins/minikube-integration/17777-1259/.minikube/certs/Users/jenkins/minikube-integration/17777-1259/.minikube/certs/cert.pem (1123 bytes)
	I1212 15:12:54.790500    3693 certs.go:437] found cert: /Users/jenkins/minikube-integration/17777-1259/.minikube/certs/Users/jenkins/minikube-integration/17777-1259/.minikube/certs/key.pem (1675 bytes)
	I1212 15:12:54.790588    3693 certs.go:437] found cert: /Users/jenkins/minikube-integration/17777-1259/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/17777-1259/.minikube/files/etc/ssl/certs/17202.pem (1708 bytes)
	I1212 15:12:54.790633    3693 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17777-1259/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1212 15:12:54.790651    3693 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17777-1259/.minikube/certs/1720.pem -> /usr/share/ca-certificates/1720.pem
	I1212 15:12:54.790668    3693 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17777-1259/.minikube/files/etc/ssl/certs/17202.pem -> /usr/share/ca-certificates/17202.pem
	I1212 15:12:54.791104    3693 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/multinode-449000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1212 15:12:54.807891    3693 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/multinode-449000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1212 15:12:54.824176    3693 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/multinode-449000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 15:12:54.840329    3693 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/multinode-449000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1212 15:12:54.856481    3693 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17777-1259/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 15:12:54.872473    3693 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17777-1259/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1212 15:12:54.888386    3693 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17777-1259/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 15:12:54.904514    3693 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17777-1259/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 15:12:54.920661    3693 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17777-1259/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 15:12:54.936749    3693 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17777-1259/.minikube/certs/1720.pem --> /usr/share/ca-certificates/1720.pem (1338 bytes)
	I1212 15:12:54.952824    3693 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17777-1259/.minikube/files/etc/ssl/certs/17202.pem --> /usr/share/ca-certificates/17202.pem (1708 bytes)
	I1212 15:12:54.968785    3693 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 15:12:54.979773    3693 ssh_runner.go:195] Run: openssl version
	I1212 15:12:54.983120    3693 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I1212 15:12:54.983323    3693 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1212 15:12:54.989677    3693 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 15:12:54.992409    3693 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 12 22:55 /usr/share/ca-certificates/minikubeCA.pem
	I1212 15:12:54.992579    3693 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec 12 22:55 /usr/share/ca-certificates/minikubeCA.pem
	I1212 15:12:54.992618    3693 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 15:12:54.996029    3693 command_runner.go:130] > b5213941
	I1212 15:12:54.996275    3693 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1212 15:12:55.002625    3693 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1720.pem && ln -fs /usr/share/ca-certificates/1720.pem /etc/ssl/certs/1720.pem"
	I1212 15:12:55.009064    3693 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1720.pem
	I1212 15:12:55.011779    3693 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 12 22:59 /usr/share/ca-certificates/1720.pem
	I1212 15:12:55.011977    3693 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec 12 22:59 /usr/share/ca-certificates/1720.pem
	I1212 15:12:55.012015    3693 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1720.pem
	I1212 15:12:55.015411    3693 command_runner.go:130] > 51391683
	I1212 15:12:55.015623    3693 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1720.pem /etc/ssl/certs/51391683.0"
	I1212 15:12:55.022080    3693 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17202.pem && ln -fs /usr/share/ca-certificates/17202.pem /etc/ssl/certs/17202.pem"
	I1212 15:12:55.028620    3693 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17202.pem
	I1212 15:12:55.031348    3693 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 12 22:59 /usr/share/ca-certificates/17202.pem
	I1212 15:12:55.031432    3693 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec 12 22:59 /usr/share/ca-certificates/17202.pem
	I1212 15:12:55.031469    3693 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17202.pem
	I1212 15:12:55.034850    3693 command_runner.go:130] > 3ec20f2e
	I1212 15:12:55.035103    3693 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/17202.pem /etc/ssl/certs/3ec20f2e.0"
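Each of the three ln -fs runs above follows the same pattern: hash the PEM with openssl, then expose it in /etc/ssl/certs under <hash>.0 so OpenSSL-based clients inside the guest trust it. A minimal manual equivalent, assuming a certificate at /usr/share/ca-certificates/example.pem (hypothetical path):

    # compute the OpenSSL subject hash and link the CA into the trust directory
    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/example.pem)
    sudo ln -fs /usr/share/ca-certificates/example.pem "/etc/ssl/certs/${hash}.0"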
	I1212 15:12:55.041439    3693 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1212 15:12:55.043898    3693 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1212 15:12:55.044067    3693 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1212 15:12:55.044109    3693 kubeadm.go:404] StartCluster: {Name:multinode-449000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17738/minikube-v1.32.1-1701996673-17738-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-449000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.169.0.13 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 15:12:55.044201    3693 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1212 15:12:55.056110    3693 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 15:12:55.061975    3693 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I1212 15:12:55.061985    3693 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I1212 15:12:55.061991    3693 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I1212 15:12:55.062134    3693 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 15:12:55.068025    3693 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 15:12:55.073658    3693 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I1212 15:12:55.073667    3693 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I1212 15:12:55.073673    3693 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I1212 15:12:55.073680    3693 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 15:12:55.073734    3693 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 15:12:55.073759    3693 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1212 15:12:55.138108    3693 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I1212 15:12:55.138111    3693 command_runner.go:130] > [init] Using Kubernetes version: v1.28.4
	I1212 15:12:55.138165    3693 kubeadm.go:322] [preflight] Running pre-flight checks
	I1212 15:12:55.138175    3693 command_runner.go:130] > [preflight] Running pre-flight checks
	I1212 15:12:55.307427    3693 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 15:12:55.307428    3693 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 15:12:55.307535    3693 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 15:12:55.307539    3693 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 15:12:55.307615    3693 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1212 15:12:55.307623    3693 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1212 15:12:55.517632    3693 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 15:12:55.573708    3693 out.go:204]   - Generating certificates and keys ...
	I1212 15:12:55.517665    3693 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 15:12:55.573793    3693 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1212 15:12:55.573802    3693 command_runner.go:130] > [certs] Using existing ca certificate authority
	I1212 15:12:55.573847    3693 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1212 15:12:55.573854    3693 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I1212 15:12:55.643371    3693 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1212 15:12:55.643400    3693 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I1212 15:12:55.761168    3693 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1212 15:12:55.761186    3693 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I1212 15:12:55.811985    3693 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1212 15:12:55.811996    3693 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I1212 15:12:56.126051    3693 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1212 15:12:56.126070    3693 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I1212 15:12:56.229427    3693 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1212 15:12:56.229430    3693 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I1212 15:12:56.229612    3693 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-449000] and IPs [192.169.0.13 127.0.0.1 ::1]
	I1212 15:12:56.229621    3693 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-449000] and IPs [192.169.0.13 127.0.0.1 ::1]
	I1212 15:12:56.416229    3693 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1212 15:12:56.416243    3693 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I1212 15:12:56.416374    3693 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-449000] and IPs [192.169.0.13 127.0.0.1 ::1]
	I1212 15:12:56.416380    3693 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-449000] and IPs [192.169.0.13 127.0.0.1 ::1]
	I1212 15:12:56.597498    3693 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1212 15:12:56.597510    3693 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I1212 15:12:56.797648    3693 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1212 15:12:56.797657    3693 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I1212 15:12:56.880841    3693 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1212 15:12:56.880851    3693 command_runner.go:130] > [certs] Generating "sa" key and public key
	I1212 15:12:56.881063    3693 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 15:12:56.881073    3693 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 15:12:57.013415    3693 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 15:12:57.013419    3693 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 15:12:57.270192    3693 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 15:12:57.270198    3693 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 15:12:57.478553    3693 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 15:12:57.478572    3693 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 15:12:57.644319    3693 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 15:12:57.644338    3693 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 15:12:57.644797    3693 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 15:12:57.644805    3693 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 15:12:57.646723    3693 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 15:12:57.646731    3693 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 15:12:57.668659    3693 out.go:204]   - Booting up control plane ...
	I1212 15:12:57.668731    3693 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 15:12:57.668738    3693 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 15:12:57.668857    3693 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 15:12:57.668864    3693 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 15:12:57.668918    3693 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 15:12:57.668926    3693 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 15:12:57.669014    3693 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 15:12:57.669021    3693 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 15:12:57.669104    3693 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 15:12:57.669111    3693 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 15:12:57.669147    3693 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1212 15:12:57.669156    3693 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I1212 15:12:57.753936    3693 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1212 15:12:57.753939    3693 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1212 15:13:03.250275    3693 kubeadm.go:322] [apiclient] All control plane components are healthy after 5.503926 seconds
	I1212 15:13:03.250284    3693 command_runner.go:130] > [apiclient] All control plane components are healthy after 5.503926 seconds
	I1212 15:13:03.250383    3693 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1212 15:13:03.250400    3693 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1212 15:13:03.259592    3693 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1212 15:13:03.259597    3693 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1212 15:13:03.775383    3693 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1212 15:13:03.775400    3693 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I1212 15:13:03.775546    3693 kubeadm.go:322] [mark-control-plane] Marking the node multinode-449000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1212 15:13:03.775558    3693 command_runner.go:130] > [mark-control-plane] Marking the node multinode-449000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1212 15:13:04.283288    3693 kubeadm.go:322] [bootstrap-token] Using token: zea8a0.kesx7bzv2fg19l81
	I1212 15:13:04.306083    3693 out.go:204]   - Configuring RBAC rules ...
	I1212 15:13:04.283314    3693 command_runner.go:130] > [bootstrap-token] Using token: zea8a0.kesx7bzv2fg19l81
	I1212 15:13:04.306257    3693 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1212 15:13:04.306265    3693 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1212 15:13:04.345082    3693 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1212 15:13:04.345094    3693 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1212 15:13:04.350949    3693 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1212 15:13:04.350959    3693 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1212 15:13:04.353129    3693 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1212 15:13:04.353132    3693 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1212 15:13:04.355176    3693 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1212 15:13:04.355183    3693 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1212 15:13:04.357494    3693 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1212 15:13:04.357505    3693 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1212 15:13:04.365079    3693 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1212 15:13:04.365092    3693 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1212 15:13:04.593649    3693 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1212 15:13:04.593668    3693 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I1212 15:13:04.748650    3693 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1212 15:13:04.748659    3693 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I1212 15:13:04.749338    3693 kubeadm.go:322] 
	I1212 15:13:04.749382    3693 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1212 15:13:04.749388    3693 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I1212 15:13:04.749391    3693 kubeadm.go:322] 
	I1212 15:13:04.749468    3693 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1212 15:13:04.749476    3693 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I1212 15:13:04.749484    3693 kubeadm.go:322] 
	I1212 15:13:04.749510    3693 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1212 15:13:04.749515    3693 command_runner.go:130] >   mkdir -p $HOME/.kube
	I1212 15:13:04.749573    3693 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1212 15:13:04.749578    3693 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1212 15:13:04.749617    3693 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1212 15:13:04.749623    3693 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1212 15:13:04.749636    3693 kubeadm.go:322] 
	I1212 15:13:04.749681    3693 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I1212 15:13:04.749695    3693 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1212 15:13:04.749704    3693 kubeadm.go:322] 
	I1212 15:13:04.749756    3693 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1212 15:13:04.749759    3693 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1212 15:13:04.749770    3693 kubeadm.go:322] 
	I1212 15:13:04.749831    3693 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I1212 15:13:04.749840    3693 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1212 15:13:04.749909    3693 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1212 15:13:04.749920    3693 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1212 15:13:04.749979    3693 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1212 15:13:04.749989    3693 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1212 15:13:04.750007    3693 kubeadm.go:322] 
	I1212 15:13:04.750079    3693 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1212 15:13:04.750091    3693 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I1212 15:13:04.750151    3693 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1212 15:13:04.750155    3693 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I1212 15:13:04.750159    3693 kubeadm.go:322] 
	I1212 15:13:04.750224    3693 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token zea8a0.kesx7bzv2fg19l81 \
	I1212 15:13:04.750230    3693 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token zea8a0.kesx7bzv2fg19l81 \
	I1212 15:13:04.750314    3693 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:25d491fbe418ba59008b56e4443168fda1f3db5a6027e11eedddf6ca431378b5 \
	I1212 15:13:04.750321    3693 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:25d491fbe418ba59008b56e4443168fda1f3db5a6027e11eedddf6ca431378b5 \
	I1212 15:13:04.750336    3693 kubeadm.go:322] 	--control-plane 
	I1212 15:13:04.750341    3693 command_runner.go:130] > 	--control-plane 
	I1212 15:13:04.750346    3693 kubeadm.go:322] 
	I1212 15:13:04.750406    3693 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1212 15:13:04.750413    3693 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I1212 15:13:04.750418    3693 kubeadm.go:322] 
	I1212 15:13:04.750488    3693 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token zea8a0.kesx7bzv2fg19l81 \
	I1212 15:13:04.750494    3693 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token zea8a0.kesx7bzv2fg19l81 \
	I1212 15:13:04.750577    3693 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:25d491fbe418ba59008b56e4443168fda1f3db5a6027e11eedddf6ca431378b5 
	I1212 15:13:04.750589    3693 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:25d491fbe418ba59008b56e4443168fda1f3db5a6027e11eedddf6ca431378b5 
	I1212 15:13:04.750858    3693 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 15:13:04.750866    3693 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
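The only preflight warning in the run above is that the kubelet systemd unit is not enabled. If that mattered outside a throwaway test VM, the fix would be the one kubeadm itself suggests, run inside the guest:

    # enable the kubelet unit so it starts on boot
    sudo systemctl enable kubelet.service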
	I1212 15:13:04.750877    3693 cni.go:84] Creating CNI manager for ""
	I1212 15:13:04.750881    3693 cni.go:136] 1 nodes found, recommending kindnet
	I1212 15:13:04.773950    3693 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1212 15:13:04.848150    3693 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1212 15:13:04.853082    3693 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I1212 15:13:04.853095    3693 command_runner.go:130] >   Size: 2615256   	Blocks: 5112       IO Block: 4096   regular file
	I1212 15:13:04.853100    3693 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I1212 15:13:04.853105    3693 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I1212 15:13:04.853111    3693 command_runner.go:130] > Access: 2023-12-12 23:12:41.973734976 +0000
	I1212 15:13:04.853116    3693 command_runner.go:130] > Modify: 2023-12-08 06:25:18.000000000 +0000
	I1212 15:13:04.853121    3693 command_runner.go:130] > Change: 2023-12-12 23:12:39.969062061 +0000
	I1212 15:13:04.853125    3693 command_runner.go:130] >  Birth: -
	I1212 15:13:04.853168    3693 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I1212 15:13:04.853174    3693 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1212 15:13:04.882757    3693 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1212 15:13:05.444730    3693 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I1212 15:13:05.444744    3693 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I1212 15:13:05.444748    3693 command_runner.go:130] > serviceaccount/kindnet created
	I1212 15:13:05.444752    3693 command_runner.go:130] > daemonset.apps/kindnet created
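With the kindnet ClusterRole, ClusterRoleBinding, ServiceAccount, and DaemonSet created, a quick manual check that the CNI pods actually come up would be the following (not part of the test run; assumes kubectl points at this cluster and that the DaemonSet carries the app=kindnet label):

    kubectl -n kube-system rollout status daemonset/kindnet --timeout=60s
    kubectl -n kube-system get pods -l app=kindnet -o wide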
	I1212 15:13:05.444774    3693 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1212 15:13:05.444841    3693 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 15:13:05.444846    3693 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=a4cfdd7fe6105c8f2fb237e157ac115c68ce5446 minikube.k8s.io/name=multinode-449000 minikube.k8s.io/updated_at=2023_12_12T15_13_05_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 15:13:05.452167    3693 command_runner.go:130] > -16
	I1212 15:13:05.452284    3693 ops.go:34] apiserver oom_adj: -16
	I1212 15:13:05.549046    3693 command_runner.go:130] > node/multinode-449000 labeled
	I1212 15:13:05.549083    3693 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
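The two kubectl runs launched above label the control-plane node with minikube's metadata and create the minikube-rbac ClusterRoleBinding for the kube-system default ServiceAccount. Manual checks, outside the test, could look like:

    kubectl get clusterrolebinding minikube-rbac -o wide
    kubectl get node multinode-449000 --show-labels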
	I1212 15:13:05.549178    3693 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 15:13:05.611301    3693 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 15:13:05.611378    3693 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 15:13:05.714120    3693 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 15:13:06.215415    3693 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 15:13:06.276116    3693 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 15:13:06.714625    3693 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 15:13:06.789483    3693 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 15:13:07.215126    3693 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 15:13:07.278492    3693 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 15:13:07.715338    3693 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 15:13:07.781486    3693 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 15:13:08.215093    3693 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 15:13:08.285207    3693 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 15:13:08.714388    3693 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 15:13:08.777717    3693 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 15:13:09.214369    3693 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 15:13:09.275727    3693 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 15:13:09.714365    3693 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 15:13:09.795048    3693 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 15:13:10.215780    3693 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 15:13:10.278589    3693 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 15:13:10.715533    3693 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 15:13:10.800142    3693 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 15:13:11.214322    3693 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 15:13:11.282989    3693 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 15:13:11.715128    3693 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 15:13:11.776629    3693 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 15:13:12.214724    3693 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 15:13:12.286017    3693 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 15:13:12.714735    3693 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 15:13:12.781719    3693 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 15:13:13.215471    3693 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 15:13:13.273161    3693 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 15:13:13.714391    3693 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 15:13:13.785517    3693 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 15:13:14.215091    3693 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 15:13:14.273150    3693 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 15:13:14.714262    3693 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 15:13:14.798854    3693 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 15:13:15.214616    3693 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 15:13:15.284000    3693 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 15:13:15.715303    3693 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 15:13:15.780077    3693 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 15:13:16.214331    3693 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 15:13:16.318276    3693 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 15:13:16.715263    3693 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 15:13:16.801265    3693 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 15:13:17.214290    3693 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 15:13:17.316897    3693 command_runner.go:130] > NAME      SECRETS   AGE
	I1212 15:13:17.316908    3693 command_runner.go:130] > default   0         1s
	I1212 15:13:17.317018    3693 kubeadm.go:1088] duration metric: took 11.872313602s to wait for elevateKubeSystemPrivileges.
	I1212 15:13:17.317036    3693 kubeadm.go:406] StartCluster complete in 22.273084749s
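The ~12 s of repeated "get sa default" calls above is minikube waiting for the controller-manager to create the default ServiceAccount before proceeding. A compact shell equivalent of that wait would be:

    # poll until the default ServiceAccount exists (sketch of the retry loop above)
    until kubectl -n default get serviceaccount default >/dev/null 2>&1; do sleep 0.5; done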
	I1212 15:13:17.317049    3693 settings.go:142] acquiring lock: {Name:mka464ae20beabe0956367b7c096b2df64ddda96 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 15:13:17.317131    3693 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/17777-1259/kubeconfig
	I1212 15:13:17.317634    3693 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17777-1259/kubeconfig: {Name:mk59d3fcca7c93e43d82a40f16bbb777946cd182 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 15:13:17.317898    3693 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1212 15:13:17.317937    3693 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1212 15:13:17.317984    3693 addons.go:69] Setting storage-provisioner=true in profile "multinode-449000"
	I1212 15:13:17.317987    3693 addons.go:69] Setting default-storageclass=true in profile "multinode-449000"
	I1212 15:13:17.317998    3693 addons.go:231] Setting addon storage-provisioner=true in "multinode-449000"
	I1212 15:13:17.318015    3693 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-449000"
	I1212 15:13:17.318036    3693 host.go:66] Checking if "multinode-449000" exists ...
	I1212 15:13:17.318060    3693 config.go:182] Loaded profile config "multinode-449000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1212 15:13:17.318107    3693 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/17777-1259/kubeconfig
	I1212 15:13:17.318282    3693 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1212 15:13:17.318309    3693 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1212 15:13:17.318311    3693 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1212 15:13:17.318326    3693 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1212 15:13:17.318316    3693 kapi.go:59] client config for multinode-449000: &rest.Config{Host:"https://192.169.0.13:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/multinode-449000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/multinode-449000/client.key", CAFile:"/Users/jenkins/minikube-integration/17777-1259/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f5e20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1212 15:13:17.321296    3693 cert_rotation.go:137] Starting client certificate rotation controller
	I1212 15:13:17.321684    3693 round_trippers.go:463] GET https://192.169.0.13:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1212 15:13:17.321696    3693 round_trippers.go:469] Request Headers:
	I1212 15:13:17.321704    3693 round_trippers.go:473]     Accept: application/json, */*
	I1212 15:13:17.321710    3693 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 15:13:17.327169    3693 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51292
	I1212 15:13:17.327512    3693 main.go:141] libmachine: () Calling .GetVersion
	I1212 15:13:17.327667    3693 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51294
	I1212 15:13:17.327857    3693 main.go:141] libmachine: Using API Version  1
	I1212 15:13:17.327873    3693 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 15:13:17.327986    3693 main.go:141] libmachine: () Calling .GetVersion
	I1212 15:13:17.328199    3693 main.go:141] libmachine: () Calling .GetMachineName
	I1212 15:13:17.328325    3693 main.go:141] libmachine: Using API Version  1
	I1212 15:13:17.328334    3693 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 15:13:17.328468    3693 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1212 15:13:17.328478    3693 round_trippers.go:577] Response Headers:
	I1212 15:13:17.328484    3693 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 15:13:17.328489    3693 round_trippers.go:580]     Content-Type: application/json
	I1212 15:13:17.328498    3693 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e29f512c-9329-4cd8-9305-82ad26aea980
	I1212 15:13:17.328503    3693 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ddf0869-55a0-47cb-b7ef-d98ab66cac66
	I1212 15:13:17.328508    3693 round_trippers.go:580]     Content-Length: 291
	I1212 15:13:17.328514    3693 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:13:17 GMT
	I1212 15:13:17.328519    3693 round_trippers.go:580]     Audit-Id: a326e03e-8e8a-4630-8292-8ec4fe4c92ad
	I1212 15:13:17.328551    3693 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"f736b503-d037-4c88-b91e-8a6459d1e321","resourceVersion":"342","creationTimestamp":"2023-12-12T23:13:04Z"},"spec":{"replicas":2},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I1212 15:13:17.328600    3693 main.go:141] libmachine: () Calling .GetMachineName
	I1212 15:13:17.328684    3693 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1212 15:13:17.328729    3693 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1212 15:13:17.328736    3693 main.go:141] libmachine: (multinode-449000) Calling .GetState
	I1212 15:13:17.328852    3693 main.go:141] libmachine: (multinode-449000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1212 15:13:17.328954    3693 main.go:141] libmachine: (multinode-449000) DBG | hyperkit pid from json: 3705
	I1212 15:13:17.329650    3693 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"f736b503-d037-4c88-b91e-8a6459d1e321","resourceVersion":"342","creationTimestamp":"2023-12-12T23:13:04Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I1212 15:13:17.329792    3693 round_trippers.go:463] PUT https://192.169.0.13:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1212 15:13:17.329814    3693 round_trippers.go:469] Request Headers:
	I1212 15:13:17.329822    3693 round_trippers.go:473]     Accept: application/json, */*
	I1212 15:13:17.329830    3693 round_trippers.go:473]     Content-Type: application/json
	I1212 15:13:17.329848    3693 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 15:13:17.331043    3693 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/17777-1259/kubeconfig
	I1212 15:13:17.331263    3693 kapi.go:59] client config for multinode-449000: &rest.Config{Host:"https://192.169.0.13:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/multinode-449000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/multinode-449000/client.key", CAFile:"/Users/jenkins/minikube-integration/17777-1259/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f5e20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1212 15:13:17.331488    3693 addons.go:231] Setting addon default-storageclass=true in "multinode-449000"
	I1212 15:13:17.331509    3693 host.go:66] Checking if "multinode-449000" exists ...
	I1212 15:13:17.331769    3693 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1212 15:13:17.331797    3693 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1212 15:13:17.337554    3693 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51296
	I1212 15:13:17.337934    3693 main.go:141] libmachine: () Calling .GetVersion
	I1212 15:13:17.338387    3693 main.go:141] libmachine: Using API Version  1
	I1212 15:13:17.338405    3693 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 15:13:17.338618    3693 main.go:141] libmachine: () Calling .GetMachineName
	I1212 15:13:17.338715    3693 main.go:141] libmachine: (multinode-449000) Calling .GetState
	I1212 15:13:17.338827    3693 main.go:141] libmachine: (multinode-449000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1212 15:13:17.338883    3693 main.go:141] libmachine: (multinode-449000) DBG | hyperkit pid from json: 3705
	I1212 15:13:17.339372    3693 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I1212 15:13:17.339383    3693 round_trippers.go:577] Response Headers:
	I1212 15:13:17.339389    3693 round_trippers.go:580]     Content-Length: 291
	I1212 15:13:17.339395    3693 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:13:17 GMT
	I1212 15:13:17.339400    3693 round_trippers.go:580]     Audit-Id: 724dc4e7-404f-4ee4-83e8-fb29b09e2b42
	I1212 15:13:17.339404    3693 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 15:13:17.339410    3693 round_trippers.go:580]     Content-Type: application/json
	I1212 15:13:17.339414    3693 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e29f512c-9329-4cd8-9305-82ad26aea980
	I1212 15:13:17.339419    3693 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ddf0869-55a0-47cb-b7ef-d98ab66cac66
	I1212 15:13:17.339557    3693 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"f736b503-d037-4c88-b91e-8a6459d1e321","resourceVersion":"343","creationTimestamp":"2023-12-12T23:13:04Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I1212 15:13:17.339691    3693 round_trippers.go:463] GET https://192.169.0.13:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1212 15:13:17.339702    3693 round_trippers.go:469] Request Headers:
	I1212 15:13:17.339708    3693 round_trippers.go:473]     Accept: application/json, */*
	I1212 15:13:17.339714    3693 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 15:13:17.339961    3693 main.go:141] libmachine: (multinode-449000) Calling .DriverName
	I1212 15:13:17.340023    3693 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51298
	I1212 15:13:17.362938    3693 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 15:13:17.340376    3693 main.go:141] libmachine: () Calling .GetVersion
	I1212 15:13:17.348158    3693 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1212 15:13:17.384039    3693 round_trippers.go:577] Response Headers:
	I1212 15:13:17.384048    3693 round_trippers.go:580]     Audit-Id: bb851fbf-30af-40a0-884c-78a520c56b1a
	I1212 15:13:17.384062    3693 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 15:13:17.384067    3693 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 15:13:17.384080    3693 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1212 15:13:17.384068    3693 round_trippers.go:580]     Content-Type: application/json
	I1212 15:13:17.384096    3693 main.go:141] libmachine: (multinode-449000) Calling .GetSSHHostname
	I1212 15:13:17.384099    3693 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e29f512c-9329-4cd8-9305-82ad26aea980
	I1212 15:13:17.384106    3693 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ddf0869-55a0-47cb-b7ef-d98ab66cac66
	I1212 15:13:17.384111    3693 round_trippers.go:580]     Content-Length: 291
	I1212 15:13:17.384116    3693 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:13:17 GMT
	I1212 15:13:17.384131    3693 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"f736b503-d037-4c88-b91e-8a6459d1e321","resourceVersion":"343","creationTimestamp":"2023-12-12T23:13:04Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I1212 15:13:17.363419    3693 main.go:141] libmachine: Using API Version  1
	I1212 15:13:17.384175    3693 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 15:13:17.384215    3693 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-449000" context rescaled to 1 replicas
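The kapi.go entry above corresponds to the two round_trippers calls logged earlier: a GET on the coredns deployment's autoscaling/v1 Scale subresource followed by a PUT that sets spec.replicas to 1. A minimal client-go sketch of that round trip, assuming an already-built *rest.Config (the helper name rescaleCoreDNS is illustrative, not minikube's actual code):

    package main

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
    )

    // rescaleCoreDNS issues the same GET/PUT pair seen in the log against
    // .../deployments/coredns/scale, setting the replica count explicitly.
    func rescaleCoreDNS(cfg *rest.Config, replicas int32) error {
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            return err
        }
        ctx := context.Background()
        scale, err := cs.AppsV1().Deployments("kube-system").GetScale(ctx, "coredns", metav1.GetOptions{})
        if err != nil {
            return err
        }
        scale.Spec.Replicas = replicas // 1 in the run above
        _, err = cs.AppsV1().Deployments("kube-system").UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{})
        return err
    }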
	I1212 15:13:17.384251    3693 start.go:223] Will wait 6m0s for node &{Name: IP:192.169.0.13 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1212 15:13:17.384277    3693 main.go:141] libmachine: (multinode-449000) Calling .GetSSHPort
	I1212 15:13:17.421216    3693 out.go:177] * Verifying Kubernetes components...
	I1212 15:13:17.384409    3693 main.go:141] libmachine: (multinode-449000) Calling .GetSSHKeyPath
	I1212 15:13:17.384484    3693 main.go:141] libmachine: () Calling .GetMachineName
	I1212 15:13:17.421618    3693 main.go:141] libmachine: (multinode-449000) Calling .GetSSHUsername
	I1212 15:13:17.422056    3693 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1212 15:13:17.459118    3693 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 15:13:17.459163    3693 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1212 15:13:17.459424    3693 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17777-1259/.minikube/machines/multinode-449000/id_rsa Username:docker}
	I1212 15:13:17.469251    3693 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51301
	I1212 15:13:17.469612    3693 main.go:141] libmachine: () Calling .GetVersion
	I1212 15:13:17.470036    3693 main.go:141] libmachine: Using API Version  1
	I1212 15:13:17.470057    3693 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 15:13:17.470271    3693 main.go:141] libmachine: () Calling .GetMachineName
	I1212 15:13:17.470378    3693 main.go:141] libmachine: (multinode-449000) Calling .GetState
	I1212 15:13:17.470477    3693 main.go:141] libmachine: (multinode-449000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1212 15:13:17.470542    3693 main.go:141] libmachine: (multinode-449000) DBG | hyperkit pid from json: 3705
	I1212 15:13:17.471533    3693 main.go:141] libmachine: (multinode-449000) Calling .DriverName
	I1212 15:13:17.471710    3693 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1212 15:13:17.471719    3693 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1212 15:13:17.471730    3693 main.go:141] libmachine: (multinode-449000) Calling .GetSSHHostname
	I1212 15:13:17.471813    3693 main.go:141] libmachine: (multinode-449000) Calling .GetSSHPort
	I1212 15:13:17.471893    3693 main.go:141] libmachine: (multinode-449000) Calling .GetSSHKeyPath
	I1212 15:13:17.472002    3693 main.go:141] libmachine: (multinode-449000) Calling .GetSSHUsername
	I1212 15:13:17.472122    3693 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17777-1259/.minikube/machines/multinode-449000/id_rsa Username:docker}
	I1212 15:13:17.475772    3693 command_runner.go:130] > apiVersion: v1
	I1212 15:13:17.475783    3693 command_runner.go:130] > data:
	I1212 15:13:17.475787    3693 command_runner.go:130] >   Corefile: |
	I1212 15:13:17.475792    3693 command_runner.go:130] >     .:53 {
	I1212 15:13:17.475796    3693 command_runner.go:130] >         errors
	I1212 15:13:17.475803    3693 command_runner.go:130] >         health {
	I1212 15:13:17.475807    3693 command_runner.go:130] >            lameduck 5s
	I1212 15:13:17.475811    3693 command_runner.go:130] >         }
	I1212 15:13:17.475814    3693 command_runner.go:130] >         ready
	I1212 15:13:17.475822    3693 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I1212 15:13:17.475826    3693 command_runner.go:130] >            pods insecure
	I1212 15:13:17.475830    3693 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I1212 15:13:17.475836    3693 command_runner.go:130] >            ttl 30
	I1212 15:13:17.475839    3693 command_runner.go:130] >         }
	I1212 15:13:17.475844    3693 command_runner.go:130] >         prometheus :9153
	I1212 15:13:17.475848    3693 command_runner.go:130] >         forward . /etc/resolv.conf {
	I1212 15:13:17.475852    3693 command_runner.go:130] >            max_concurrent 1000
	I1212 15:13:17.475856    3693 command_runner.go:130] >         }
	I1212 15:13:17.475859    3693 command_runner.go:130] >         cache 30
	I1212 15:13:17.475863    3693 command_runner.go:130] >         loop
	I1212 15:13:17.475866    3693 command_runner.go:130] >         reload
	I1212 15:13:17.475876    3693 command_runner.go:130] >         loadbalance
	I1212 15:13:17.475880    3693 command_runner.go:130] >     }
	I1212 15:13:17.475884    3693 command_runner.go:130] > kind: ConfigMap
	I1212 15:13:17.475889    3693 command_runner.go:130] > metadata:
	I1212 15:13:17.475895    3693 command_runner.go:130] >   creationTimestamp: "2023-12-12T23:13:04Z"
	I1212 15:13:17.475899    3693 command_runner.go:130] >   name: coredns
	I1212 15:13:17.475903    3693 command_runner.go:130] >   namespace: kube-system
	I1212 15:13:17.475907    3693 command_runner.go:130] >   resourceVersion: "231"
	I1212 15:13:17.475912    3693 command_runner.go:130] >   uid: f9bb7a70-2db5-4a8f-90d5-b8bc77095680
	I1212 15:13:17.476023    3693 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.169.0.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
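The sed expressions in that pipeline insert a hosts block ahead of the forward plugin (mapping host.minikube.internal to the host gateway IP) and a log directive ahead of errors; kubectl replace then writes the edited ConfigMap back. Applied to the Corefile shown above, the patched server block would read approximately:

    .:53 {
        log
        errors
        health {
           lameduck 5s
        }
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
           pods insecure
           fallthrough in-addr.arpa ip6.arpa
           ttl 30
        }
        prometheus :9153
        hosts {
           192.169.0.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf {
           max_concurrent 1000
        }
        cache 30
        loop
        reload
        loadbalance
    }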
	I1212 15:13:17.515818    3693 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/17777-1259/kubeconfig
	I1212 15:13:17.516099    3693 kapi.go:59] client config for multinode-449000: &rest.Config{Host:"https://192.169.0.13:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/multinode-449000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/multinode-449000/client.key", CAFile:"/Users/jenkins/minikube-integration/17777-1259/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextP
rotos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f5e20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1212 15:13:17.516309    3693 node_ready.go:35] waiting up to 6m0s for node "multinode-449000" to be "Ready" ...
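The repeated GET https://192.169.0.13:8443/api/v1/nodes/multinode-449000 requests that follow are this wait loop polling the node object until its Ready condition turns true (the node_ready.go:58 entries report it still False). A rough client-go equivalent of such a loop, assuming the same kubeconfig path; the function and variable names here are illustrative only, not minikube's implementation:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitNodeReady polls the node object until its Ready condition is True
    // or the timeout elapses, mirroring the ~500ms GET cadence in the log.
    func waitNodeReady(kubeconfig, name string, timeout time.Duration) error {
        cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
        if err != nil {
            return err
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            return err
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            node, err := cs.CoreV1().Nodes().Get(context.Background(), name, metav1.GetOptions{})
            if err != nil {
                return err
            }
            for _, c := range node.Status.Conditions {
                if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
                    return nil
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("node %q was not Ready within %v", name, timeout)
    }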
	I1212 15:13:17.516370    3693 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-449000
	I1212 15:13:17.516374    3693 round_trippers.go:469] Request Headers:
	I1212 15:13:17.516381    3693 round_trippers.go:473]     Accept: application/json, */*
	I1212 15:13:17.516387    3693 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 15:13:17.518559    3693 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 15:13:17.518568    3693 round_trippers.go:577] Response Headers:
	I1212 15:13:17.518573    3693 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ddf0869-55a0-47cb-b7ef-d98ab66cac66
	I1212 15:13:17.518578    3693 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:13:17 GMT
	I1212 15:13:17.518586    3693 round_trippers.go:580]     Audit-Id: 9b3dae2c-b124-4817-a592-919bd3b1038c
	I1212 15:13:17.518591    3693 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 15:13:17.518596    3693 round_trippers.go:580]     Content-Type: application/json
	I1212 15:13:17.518601    3693 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e29f512c-9329-4cd8-9305-82ad26aea980
	I1212 15:13:17.519296    3693 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-449000","uid":"ef12e061-4098-4e16-9f9d-81f611fa0b4f","resourceVersion":"306","creationTimestamp":"2023-12-12T23:13:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-449000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a4cfdd7fe6105c8f2fb237e157ac115c68ce5446","minikube.k8s.io/name":"multinode-449000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T15_13_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:13:01Z","fieldsType":"FieldsV1","fi [truncated 4926 chars]
	I1212 15:13:17.519762    3693 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-449000
	I1212 15:13:17.519770    3693 round_trippers.go:469] Request Headers:
	I1212 15:13:17.519776    3693 round_trippers.go:473]     Accept: application/json, */*
	I1212 15:13:17.519781    3693 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 15:13:17.525785    3693 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1212 15:13:17.525797    3693 round_trippers.go:577] Response Headers:
	I1212 15:13:17.525803    3693 round_trippers.go:580]     Audit-Id: b6131687-fa24-496b-8fc4-d22636a34757
	I1212 15:13:17.525808    3693 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 15:13:17.525813    3693 round_trippers.go:580]     Content-Type: application/json
	I1212 15:13:17.525817    3693 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e29f512c-9329-4cd8-9305-82ad26aea980
	I1212 15:13:17.525822    3693 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ddf0869-55a0-47cb-b7ef-d98ab66cac66
	I1212 15:13:17.525827    3693 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:13:17 GMT
	I1212 15:13:17.526326    3693 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-449000","uid":"ef12e061-4098-4e16-9f9d-81f611fa0b4f","resourceVersion":"306","creationTimestamp":"2023-12-12T23:13:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-449000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a4cfdd7fe6105c8f2fb237e157ac115c68ce5446","minikube.k8s.io/name":"multinode-449000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T15_13_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:13:01Z","fieldsType":"FieldsV1","fi [truncated 4926 chars]
	I1212 15:13:17.601551    3693 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 15:13:17.639497    3693 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1212 15:13:18.027763    3693 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-449000
	I1212 15:13:18.027776    3693 round_trippers.go:469] Request Headers:
	I1212 15:13:18.027783    3693 round_trippers.go:473]     Accept: application/json, */*
	I1212 15:13:18.027791    3693 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 15:13:18.029668    3693 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 15:13:18.029679    3693 round_trippers.go:577] Response Headers:
	I1212 15:13:18.029684    3693 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 15:13:18.029689    3693 round_trippers.go:580]     Content-Type: application/json
	I1212 15:13:18.029693    3693 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e29f512c-9329-4cd8-9305-82ad26aea980
	I1212 15:13:18.029699    3693 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ddf0869-55a0-47cb-b7ef-d98ab66cac66
	I1212 15:13:18.029703    3693 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:13:18 GMT
	I1212 15:13:18.029708    3693 round_trippers.go:580]     Audit-Id: a2511a7f-348b-43f9-a012-5676f24a5e99
	I1212 15:13:18.029800    3693 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-449000","uid":"ef12e061-4098-4e16-9f9d-81f611fa0b4f","resourceVersion":"306","creationTimestamp":"2023-12-12T23:13:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-449000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a4cfdd7fe6105c8f2fb237e157ac115c68ce5446","minikube.k8s.io/name":"multinode-449000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T15_13_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:13:01Z","fieldsType":"FieldsV1","fi [truncated 4926 chars]
	I1212 15:13:18.071439    3693 command_runner.go:130] > configmap/coredns replaced
	I1212 15:13:18.077205    3693 start.go:929] {"host.minikube.internal": 192.169.0.1} host record injected into CoreDNS's ConfigMap
	I1212 15:13:18.326833    3693 command_runner.go:130] > serviceaccount/storage-provisioner created
	I1212 15:13:18.326848    3693 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I1212 15:13:18.326854    3693 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I1212 15:13:18.326875    3693 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I1212 15:13:18.326880    3693 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I1212 15:13:18.326884    3693 command_runner.go:130] > pod/storage-provisioner created
	I1212 15:13:18.326920    3693 main.go:141] libmachine: Making call to close driver server
	I1212 15:13:18.326930    3693 main.go:141] libmachine: (multinode-449000) Calling .Close
	I1212 15:13:18.326931    3693 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I1212 15:13:18.326960    3693 main.go:141] libmachine: Making call to close driver server
	I1212 15:13:18.326970    3693 main.go:141] libmachine: (multinode-449000) Calling .Close
	I1212 15:13:18.327135    3693 main.go:141] libmachine: Successfully made call to close driver server
	I1212 15:13:18.327137    3693 main.go:141] libmachine: Successfully made call to close driver server
	I1212 15:13:18.327147    3693 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 15:13:18.327146    3693 main.go:141] libmachine: (multinode-449000) DBG | Closing plugin on server side
	I1212 15:13:18.327150    3693 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 15:13:18.327156    3693 main.go:141] libmachine: Making call to close driver server
	I1212 15:13:18.327178    3693 main.go:141] libmachine: (multinode-449000) Calling .Close
	I1212 15:13:18.327180    3693 main.go:141] libmachine: (multinode-449000) DBG | Closing plugin on server side
	I1212 15:13:18.327163    3693 main.go:141] libmachine: Making call to close driver server
	I1212 15:13:18.327209    3693 main.go:141] libmachine: (multinode-449000) Calling .Close
	I1212 15:13:18.327325    3693 main.go:141] libmachine: (multinode-449000) DBG | Closing plugin on server side
	I1212 15:13:18.327345    3693 main.go:141] libmachine: Successfully made call to close driver server
	I1212 15:13:18.327356    3693 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 15:13:18.327409    3693 main.go:141] libmachine: Successfully made call to close driver server
	I1212 15:13:18.327411    3693 main.go:141] libmachine: (multinode-449000) DBG | Closing plugin on server side
	I1212 15:13:18.327422    3693 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 15:13:18.327500    3693 round_trippers.go:463] GET https://192.169.0.13:8443/apis/storage.k8s.io/v1/storageclasses
	I1212 15:13:18.327507    3693 round_trippers.go:469] Request Headers:
	I1212 15:13:18.327516    3693 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 15:13:18.327530    3693 round_trippers.go:473]     Accept: application/json, */*
	I1212 15:13:18.329645    3693 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 15:13:18.329656    3693 round_trippers.go:577] Response Headers:
	I1212 15:13:18.329662    3693 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 15:13:18.329674    3693 round_trippers.go:580]     Content-Type: application/json
	I1212 15:13:18.329679    3693 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e29f512c-9329-4cd8-9305-82ad26aea980
	I1212 15:13:18.329683    3693 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ddf0869-55a0-47cb-b7ef-d98ab66cac66
	I1212 15:13:18.329688    3693 round_trippers.go:580]     Content-Length: 1273
	I1212 15:13:18.329692    3693 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:13:18 GMT
	I1212 15:13:18.329696    3693 round_trippers.go:580]     Audit-Id: db7573df-fc0d-4ee3-9d8d-885facb61a3c
	I1212 15:13:18.329746    3693 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"377"},"items":[{"metadata":{"name":"standard","uid":"20fb0e5b-d511-4ab4-8113-6d7f1494ee7b","resourceVersion":"369","creationTimestamp":"2023-12-12T23:13:18Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-12-12T23:13:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kuberne
tes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I1212 15:13:18.330010    3693 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"20fb0e5b-d511-4ab4-8113-6d7f1494ee7b","resourceVersion":"369","creationTimestamp":"2023-12-12T23:13:18Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-12-12T23:13:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclas
s.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I1212 15:13:18.330041    3693 round_trippers.go:463] PUT https://192.169.0.13:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I1212 15:13:18.330050    3693 round_trippers.go:469] Request Headers:
	I1212 15:13:18.330057    3693 round_trippers.go:473]     Content-Type: application/json
	I1212 15:13:18.330063    3693 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 15:13:18.330068    3693 round_trippers.go:473]     Accept: application/json, */*
	I1212 15:13:18.331800    3693 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 15:13:18.331814    3693 round_trippers.go:577] Response Headers:
	I1212 15:13:18.331819    3693 round_trippers.go:580]     Content-Length: 1220
	I1212 15:13:18.331825    3693 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:13:18 GMT
	I1212 15:13:18.331829    3693 round_trippers.go:580]     Audit-Id: ffa986b9-8d68-4d1b-b1b6-a58091494c0f
	I1212 15:13:18.331835    3693 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 15:13:18.331840    3693 round_trippers.go:580]     Content-Type: application/json
	I1212 15:13:18.331846    3693 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e29f512c-9329-4cd8-9305-82ad26aea980
	I1212 15:13:18.331851    3693 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ddf0869-55a0-47cb-b7ef-d98ab66cac66
	I1212 15:13:18.331873    3693 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"20fb0e5b-d511-4ab4-8113-6d7f1494ee7b","resourceVersion":"369","creationTimestamp":"2023-12-12T23:13:18Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-12-12T23:13:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storagecla
ss.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I1212 15:13:18.331942    3693 main.go:141] libmachine: Making call to close driver server
	I1212 15:13:18.331950    3693 main.go:141] libmachine: (multinode-449000) Calling .Close
	I1212 15:13:18.332092    3693 main.go:141] libmachine: Successfully made call to close driver server
	I1212 15:13:18.332101    3693 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 15:13:18.332145    3693 main.go:141] libmachine: (multinode-449000) DBG | Closing plugin on server side
	I1212 15:13:18.354393    3693 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1212 15:13:18.412387    3693 addons.go:502] enable addons completed in 1.094462454s: enabled=[storage-provisioner default-storageclass]
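The PUT on /apis/storage.k8s.io/v1/storageclasses/standard above re-submits the addon's StorageClass with the storageclass.kubernetes.io/is-default-class annotation set to "true", which is what makes it the cluster default. A rough client-go sketch of the same update (markDefaultStorageClass is an illustrative name, not minikube's):

    package main

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // markDefaultStorageClass fetches the "standard" StorageClass and writes it
    // back with the default-class annotation, as in the logged GET/PUT pair.
    func markDefaultStorageClass(cs *kubernetes.Clientset) error {
        ctx := context.Background()
        sc, err := cs.StorageV1().StorageClasses().Get(ctx, "standard", metav1.GetOptions{})
        if err != nil {
            return err
        }
        if sc.Annotations == nil {
            sc.Annotations = map[string]string{}
        }
        sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "true"
        _, err = cs.StorageV1().StorageClasses().Update(ctx, sc, metav1.UpdateOptions{})
        return err
    }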
	I1212 15:13:18.527449    3693 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-449000
	I1212 15:13:18.527469    3693 round_trippers.go:469] Request Headers:
	I1212 15:13:18.527478    3693 round_trippers.go:473]     Accept: application/json, */*
	I1212 15:13:18.527488    3693 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 15:13:18.529680    3693 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 15:13:18.529691    3693 round_trippers.go:577] Response Headers:
	I1212 15:13:18.529696    3693 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:13:18 GMT
	I1212 15:13:18.529701    3693 round_trippers.go:580]     Audit-Id: 32c64cc0-1454-4a0d-9f84-63362857fea3
	I1212 15:13:18.529706    3693 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 15:13:18.529711    3693 round_trippers.go:580]     Content-Type: application/json
	I1212 15:13:18.529715    3693 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e29f512c-9329-4cd8-9305-82ad26aea980
	I1212 15:13:18.529720    3693 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ddf0869-55a0-47cb-b7ef-d98ab66cac66
	I1212 15:13:18.529791    3693 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-449000","uid":"ef12e061-4098-4e16-9f9d-81f611fa0b4f","resourceVersion":"306","creationTimestamp":"2023-12-12T23:13:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-449000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a4cfdd7fe6105c8f2fb237e157ac115c68ce5446","minikube.k8s.io/name":"multinode-449000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T15_13_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:13:01Z","fieldsType":"FieldsV1","fi [truncated 4926 chars]
	I1212 15:13:19.027844    3693 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-449000
	I1212 15:13:19.027861    3693 round_trippers.go:469] Request Headers:
	I1212 15:13:19.027867    3693 round_trippers.go:473]     Accept: application/json, */*
	I1212 15:13:19.027872    3693 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 15:13:19.029408    3693 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 15:13:19.029419    3693 round_trippers.go:577] Response Headers:
	I1212 15:13:19.029427    3693 round_trippers.go:580]     Audit-Id: 6b932db8-330d-4875-ba36-b5f9f839acc7
	I1212 15:13:19.029434    3693 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 15:13:19.029441    3693 round_trippers.go:580]     Content-Type: application/json
	I1212 15:13:19.029446    3693 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e29f512c-9329-4cd8-9305-82ad26aea980
	I1212 15:13:19.029450    3693 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ddf0869-55a0-47cb-b7ef-d98ab66cac66
	I1212 15:13:19.029456    3693 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:13:19 GMT
	I1212 15:13:19.029541    3693 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-449000","uid":"ef12e061-4098-4e16-9f9d-81f611fa0b4f","resourceVersion":"306","creationTimestamp":"2023-12-12T23:13:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-449000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a4cfdd7fe6105c8f2fb237e157ac115c68ce5446","minikube.k8s.io/name":"multinode-449000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T15_13_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:13:01Z","fieldsType":"FieldsV1","fi [truncated 4926 chars]
	I1212 15:13:19.527270    3693 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-449000
	I1212 15:13:19.527296    3693 round_trippers.go:469] Request Headers:
	I1212 15:13:19.527305    3693 round_trippers.go:473]     Accept: application/json, */*
	I1212 15:13:19.527313    3693 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 15:13:19.529374    3693 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 15:13:19.529384    3693 round_trippers.go:577] Response Headers:
	I1212 15:13:19.529389    3693 round_trippers.go:580]     Audit-Id: c5b0e9a7-dd17-4b1a-885f-ae912514695b
	I1212 15:13:19.529394    3693 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 15:13:19.529400    3693 round_trippers.go:580]     Content-Type: application/json
	I1212 15:13:19.529409    3693 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e29f512c-9329-4cd8-9305-82ad26aea980
	I1212 15:13:19.529414    3693 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ddf0869-55a0-47cb-b7ef-d98ab66cac66
	I1212 15:13:19.529418    3693 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:13:19 GMT
	I1212 15:13:19.529572    3693 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-449000","uid":"ef12e061-4098-4e16-9f9d-81f611fa0b4f","resourceVersion":"306","creationTimestamp":"2023-12-12T23:13:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-449000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a4cfdd7fe6105c8f2fb237e157ac115c68ce5446","minikube.k8s.io/name":"multinode-449000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T15_13_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:13:01Z","fieldsType":"FieldsV1","fi [truncated 4926 chars]
	I1212 15:13:19.529783    3693 node_ready.go:58] node "multinode-449000" has status "Ready":"False"
	I1212 15:13:20.027226    3693 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-449000
	I1212 15:13:20.027240    3693 round_trippers.go:469] Request Headers:
	I1212 15:13:20.027265    3693 round_trippers.go:473]     Accept: application/json, */*
	I1212 15:13:20.027274    3693 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 15:13:20.028877    3693 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 15:13:20.028887    3693 round_trippers.go:577] Response Headers:
	I1212 15:13:20.028893    3693 round_trippers.go:580]     Audit-Id: 7fbb779f-f620-4943-9222-e071d4eb0d60
	I1212 15:13:20.028897    3693 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 15:13:20.028902    3693 round_trippers.go:580]     Content-Type: application/json
	I1212 15:13:20.028906    3693 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e29f512c-9329-4cd8-9305-82ad26aea980
	I1212 15:13:20.028911    3693 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ddf0869-55a0-47cb-b7ef-d98ab66cac66
	I1212 15:13:20.028916    3693 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:13:20 GMT
	I1212 15:13:20.029055    3693 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-449000","uid":"ef12e061-4098-4e16-9f9d-81f611fa0b4f","resourceVersion":"306","creationTimestamp":"2023-12-12T23:13:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-449000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a4cfdd7fe6105c8f2fb237e157ac115c68ce5446","minikube.k8s.io/name":"multinode-449000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T15_13_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:13:01Z","fieldsType":"FieldsV1","fi [truncated 4926 chars]
	I1212 15:13:20.527229    3693 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-449000
	I1212 15:13:20.527251    3693 round_trippers.go:469] Request Headers:
	I1212 15:13:20.527263    3693 round_trippers.go:473]     Accept: application/json, */*
	I1212 15:13:20.527274    3693 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 15:13:20.530511    3693 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 15:13:20.530525    3693 round_trippers.go:577] Response Headers:
	I1212 15:13:20.530533    3693 round_trippers.go:580]     Audit-Id: 8756e10b-d9e8-4016-ac2c-b203a6dc0f92
	I1212 15:13:20.530539    3693 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 15:13:20.530545    3693 round_trippers.go:580]     Content-Type: application/json
	I1212 15:13:20.530552    3693 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e29f512c-9329-4cd8-9305-82ad26aea980
	I1212 15:13:20.530558    3693 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ddf0869-55a0-47cb-b7ef-d98ab66cac66
	I1212 15:13:20.530570    3693 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:13:20 GMT
	I1212 15:13:20.530737    3693 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-449000","uid":"ef12e061-4098-4e16-9f9d-81f611fa0b4f","resourceVersion":"306","creationTimestamp":"2023-12-12T23:13:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-449000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a4cfdd7fe6105c8f2fb237e157ac115c68ce5446","minikube.k8s.io/name":"multinode-449000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T15_13_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:13:01Z","fieldsType":"FieldsV1","fi [truncated 4926 chars]
	I1212 15:13:21.027449    3693 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-449000
	I1212 15:13:21.027467    3693 round_trippers.go:469] Request Headers:
	I1212 15:13:21.027473    3693 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 15:13:21.027485    3693 round_trippers.go:473]     Accept: application/json, */*
	I1212 15:13:21.029243    3693 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 15:13:21.029259    3693 round_trippers.go:577] Response Headers:
	I1212 15:13:21.029271    3693 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e29f512c-9329-4cd8-9305-82ad26aea980
	I1212 15:13:21.029280    3693 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ddf0869-55a0-47cb-b7ef-d98ab66cac66
	I1212 15:13:21.029298    3693 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:13:21 GMT
	I1212 15:13:21.029308    3693 round_trippers.go:580]     Audit-Id: b309a0a0-aeaa-4c26-a167-143aaac8d63f
	I1212 15:13:21.029313    3693 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 15:13:21.029319    3693 round_trippers.go:580]     Content-Type: application/json
	I1212 15:13:21.029409    3693 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-449000","uid":"ef12e061-4098-4e16-9f9d-81f611fa0b4f","resourceVersion":"306","creationTimestamp":"2023-12-12T23:13:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-449000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a4cfdd7fe6105c8f2fb237e157ac115c68ce5446","minikube.k8s.io/name":"multinode-449000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T15_13_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:13:01Z","fieldsType":"FieldsV1","fi [truncated 4926 chars]
	I1212 15:13:21.528286    3693 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-449000
	I1212 15:13:21.528321    3693 round_trippers.go:469] Request Headers:
	I1212 15:13:21.528336    3693 round_trippers.go:473]     Accept: application/json, */*
	I1212 15:13:21.528346    3693 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 15:13:21.530715    3693 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 15:13:21.530733    3693 round_trippers.go:577] Response Headers:
	I1212 15:13:21.530742    3693 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 15:13:21.530749    3693 round_trippers.go:580]     Content-Type: application/json
	I1212 15:13:21.530757    3693 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e29f512c-9329-4cd8-9305-82ad26aea980
	I1212 15:13:21.530763    3693 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ddf0869-55a0-47cb-b7ef-d98ab66cac66
	I1212 15:13:21.530770    3693 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:13:21 GMT
	I1212 15:13:21.530776    3693 round_trippers.go:580]     Audit-Id: b0a2e9fc-c9db-48e7-8fbe-bd75682a930c
	I1212 15:13:21.531221    3693 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-449000","uid":"ef12e061-4098-4e16-9f9d-81f611fa0b4f","resourceVersion":"306","creationTimestamp":"2023-12-12T23:13:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-449000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a4cfdd7fe6105c8f2fb237e157ac115c68ce5446","minikube.k8s.io/name":"multinode-449000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T15_13_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:13:01Z","fieldsType":"FieldsV1","fi [truncated 4926 chars]
	I1212 15:13:21.531419    3693 node_ready.go:58] node "multinode-449000" has status "Ready":"False"
	I1212 15:13:22.026715    3693 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-449000
	I1212 15:13:22.026726    3693 round_trippers.go:469] Request Headers:
	I1212 15:13:22.026732    3693 round_trippers.go:473]     Accept: application/json, */*
	I1212 15:13:22.026737    3693 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 15:13:22.028386    3693 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 15:13:22.028425    3693 round_trippers.go:577] Response Headers:
	I1212 15:13:22.028439    3693 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e29f512c-9329-4cd8-9305-82ad26aea980
	I1212 15:13:22.028446    3693 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ddf0869-55a0-47cb-b7ef-d98ab66cac66
	I1212 15:13:22.028453    3693 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:13:22 GMT
	I1212 15:13:22.028460    3693 round_trippers.go:580]     Audit-Id: 02fae500-bf84-41d4-8be6-b81389fd7b79
	I1212 15:13:22.028465    3693 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 15:13:22.028472    3693 round_trippers.go:580]     Content-Type: application/json
	I1212 15:13:22.028572    3693 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-449000","uid":"ef12e061-4098-4e16-9f9d-81f611fa0b4f","resourceVersion":"306","creationTimestamp":"2023-12-12T23:13:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-449000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a4cfdd7fe6105c8f2fb237e157ac115c68ce5446","minikube.k8s.io/name":"multinode-449000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T15_13_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:13:01Z","fieldsType":"FieldsV1","fi [truncated 4926 chars]
	I1212 15:13:22.526641    3693 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-449000
	I1212 15:13:22.526685    3693 round_trippers.go:469] Request Headers:
	I1212 15:13:22.526692    3693 round_trippers.go:473]     Accept: application/json, */*
	I1212 15:13:22.526698    3693 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 15:13:22.528299    3693 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 15:13:22.528311    3693 round_trippers.go:577] Response Headers:
	I1212 15:13:22.528317    3693 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ddf0869-55a0-47cb-b7ef-d98ab66cac66
	I1212 15:13:22.528322    3693 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:13:22 GMT
	I1212 15:13:22.528326    3693 round_trippers.go:580]     Audit-Id: 6e08eaa4-28ac-4bde-b906-047d2628dbbd
	I1212 15:13:22.528332    3693 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 15:13:22.528336    3693 round_trippers.go:580]     Content-Type: application/json
	I1212 15:13:22.528342    3693 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e29f512c-9329-4cd8-9305-82ad26aea980
	I1212 15:13:22.528433    3693 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-449000","uid":"ef12e061-4098-4e16-9f9d-81f611fa0b4f","resourceVersion":"306","creationTimestamp":"2023-12-12T23:13:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-449000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a4cfdd7fe6105c8f2fb237e157ac115c68ce5446","minikube.k8s.io/name":"multinode-449000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T15_13_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:13:01Z","fieldsType":"FieldsV1","fi [truncated 4926 chars]
	I1212 15:13:23.026909    3693 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-449000
	I1212 15:13:23.026924    3693 round_trippers.go:469] Request Headers:
	I1212 15:13:23.026930    3693 round_trippers.go:473]     Accept: application/json, */*
	I1212 15:13:23.026939    3693 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 15:13:23.028770    3693 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 15:13:23.028781    3693 round_trippers.go:577] Response Headers:
	I1212 15:13:23.028787    3693 round_trippers.go:580]     Audit-Id: cf86fcba-07bd-4976-810e-7faedfee6db8
	I1212 15:13:23.028791    3693 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 15:13:23.028795    3693 round_trippers.go:580]     Content-Type: application/json
	I1212 15:13:23.028800    3693 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e29f512c-9329-4cd8-9305-82ad26aea980
	I1212 15:13:23.028804    3693 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ddf0869-55a0-47cb-b7ef-d98ab66cac66
	I1212 15:13:23.028810    3693 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:13:23 GMT
	I1212 15:13:23.029019    3693 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-449000","uid":"ef12e061-4098-4e16-9f9d-81f611fa0b4f","resourceVersion":"306","creationTimestamp":"2023-12-12T23:13:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-449000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a4cfdd7fe6105c8f2fb237e157ac115c68ce5446","minikube.k8s.io/name":"multinode-449000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T15_13_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:13:01Z","fieldsType":"FieldsV1","fi [truncated 4926 chars]
	I1212 15:13:23.526792    3693 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-449000
	I1212 15:13:23.526807    3693 round_trippers.go:469] Request Headers:
	I1212 15:13:23.526814    3693 round_trippers.go:473]     Accept: application/json, */*
	I1212 15:13:23.526819    3693 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 15:13:23.528521    3693 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 15:13:23.528533    3693 round_trippers.go:577] Response Headers:
	I1212 15:13:23.528538    3693 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ddf0869-55a0-47cb-b7ef-d98ab66cac66
	I1212 15:13:23.528543    3693 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:13:23 GMT
	I1212 15:13:23.528548    3693 round_trippers.go:580]     Audit-Id: 3fc23f4e-91e7-4237-a8d9-09403688aaab
	I1212 15:13:23.528553    3693 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 15:13:23.528558    3693 round_trippers.go:580]     Content-Type: application/json
	I1212 15:13:23.528563    3693 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e29f512c-9329-4cd8-9305-82ad26aea980
	I1212 15:13:23.528637    3693 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-449000","uid":"ef12e061-4098-4e16-9f9d-81f611fa0b4f","resourceVersion":"306","creationTimestamp":"2023-12-12T23:13:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-449000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a4cfdd7fe6105c8f2fb237e157ac115c68ce5446","minikube.k8s.io/name":"multinode-449000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T15_13_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:13:01Z","fieldsType":"FieldsV1","fi [truncated 4926 chars]
	I1212 15:13:24.027958    3693 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-449000
	I1212 15:13:24.027980    3693 round_trippers.go:469] Request Headers:
	I1212 15:13:24.027993    3693 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 15:13:24.028003    3693 round_trippers.go:473]     Accept: application/json, */*
	I1212 15:13:24.030810    3693 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 15:13:24.030827    3693 round_trippers.go:577] Response Headers:
	I1212 15:13:24.030834    3693 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ddf0869-55a0-47cb-b7ef-d98ab66cac66
	I1212 15:13:24.030842    3693 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:13:24 GMT
	I1212 15:13:24.030848    3693 round_trippers.go:580]     Audit-Id: 6af5d1e1-08b0-4158-a45b-d191ceceb44a
	I1212 15:13:24.030854    3693 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 15:13:24.030861    3693 round_trippers.go:580]     Content-Type: application/json
	I1212 15:13:24.030867    3693 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e29f512c-9329-4cd8-9305-82ad26aea980
	I1212 15:13:24.030966    3693 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-449000","uid":"ef12e061-4098-4e16-9f9d-81f611fa0b4f","resourceVersion":"306","creationTimestamp":"2023-12-12T23:13:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-449000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a4cfdd7fe6105c8f2fb237e157ac115c68ce5446","minikube.k8s.io/name":"multinode-449000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T15_13_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:13:01Z","fieldsType":"FieldsV1","fi [truncated 4926 chars]
	I1212 15:13:24.031224    3693 node_ready.go:58] node "multinode-449000" has status "Ready":"False"
	I1212 15:13:24.527715    3693 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-449000
	I1212 15:13:24.527736    3693 round_trippers.go:469] Request Headers:
	I1212 15:13:24.527748    3693 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 15:13:24.527758    3693 round_trippers.go:473]     Accept: application/json, */*
	I1212 15:13:24.530654    3693 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 15:13:24.530667    3693 round_trippers.go:577] Response Headers:
	I1212 15:13:24.530690    3693 round_trippers.go:580]     Audit-Id: 73528e8a-62a9-4ef2-bc8f-d2dc155a94b0
	I1212 15:13:24.530700    3693 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 15:13:24.530707    3693 round_trippers.go:580]     Content-Type: application/json
	I1212 15:13:24.530717    3693 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e29f512c-9329-4cd8-9305-82ad26aea980
	I1212 15:13:24.530731    3693 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ddf0869-55a0-47cb-b7ef-d98ab66cac66
	I1212 15:13:24.530740    3693 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:13:24 GMT
	I1212 15:13:24.531157    3693 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-449000","uid":"ef12e061-4098-4e16-9f9d-81f611fa0b4f","resourceVersion":"306","creationTimestamp":"2023-12-12T23:13:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-449000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a4cfdd7fe6105c8f2fb237e157ac115c68ce5446","minikube.k8s.io/name":"multinode-449000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T15_13_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:13:01Z","fieldsType":"FieldsV1","fi [truncated 4926 chars]
	I1212 15:13:25.027026    3693 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-449000
	I1212 15:13:25.027043    3693 round_trippers.go:469] Request Headers:
	I1212 15:13:25.027050    3693 round_trippers.go:473]     Accept: application/json, */*
	I1212 15:13:25.027055    3693 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 15:13:25.028838    3693 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 15:13:25.028847    3693 round_trippers.go:577] Response Headers:
	I1212 15:13:25.028853    3693 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 15:13:25.028859    3693 round_trippers.go:580]     Content-Type: application/json
	I1212 15:13:25.028867    3693 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e29f512c-9329-4cd8-9305-82ad26aea980
	I1212 15:13:25.028874    3693 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ddf0869-55a0-47cb-b7ef-d98ab66cac66
	I1212 15:13:25.028887    3693 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:13:25 GMT
	I1212 15:13:25.028894    3693 round_trippers.go:580]     Audit-Id: 265e30d8-9f8c-4654-acae-b098f9ac5e9b
	I1212 15:13:25.029156    3693 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-449000","uid":"ef12e061-4098-4e16-9f9d-81f611fa0b4f","resourceVersion":"306","creationTimestamp":"2023-12-12T23:13:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-449000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a4cfdd7fe6105c8f2fb237e157ac115c68ce5446","minikube.k8s.io/name":"multinode-449000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T15_13_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:13:01Z","fieldsType":"FieldsV1","fi [truncated 4926 chars]
	I1212 15:13:25.527081    3693 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-449000
	I1212 15:13:25.527103    3693 round_trippers.go:469] Request Headers:
	I1212 15:13:25.527120    3693 round_trippers.go:473]     Accept: application/json, */*
	I1212 15:13:25.527130    3693 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 15:13:25.530536    3693 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 15:13:25.530559    3693 round_trippers.go:577] Response Headers:
	I1212 15:13:25.530580    3693 round_trippers.go:580]     Content-Type: application/json
	I1212 15:13:25.530589    3693 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e29f512c-9329-4cd8-9305-82ad26aea980
	I1212 15:13:25.530596    3693 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ddf0869-55a0-47cb-b7ef-d98ab66cac66
	I1212 15:13:25.530602    3693 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:13:25 GMT
	I1212 15:13:25.530608    3693 round_trippers.go:580]     Audit-Id: a43f4318-fe44-4505-9f64-e63d879e8476
	I1212 15:13:25.530617    3693 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 15:13:25.530883    3693 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-449000","uid":"ef12e061-4098-4e16-9f9d-81f611fa0b4f","resourceVersion":"306","creationTimestamp":"2023-12-12T23:13:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-449000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a4cfdd7fe6105c8f2fb237e157ac115c68ce5446","minikube.k8s.io/name":"multinode-449000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T15_13_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:13:01Z","fieldsType":"FieldsV1","fi [truncated 4926 chars]
	I1212 15:13:26.027424    3693 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-449000
	I1212 15:13:26.027439    3693 round_trippers.go:469] Request Headers:
	I1212 15:13:26.027446    3693 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 15:13:26.027451    3693 round_trippers.go:473]     Accept: application/json, */*
	I1212 15:13:26.029124    3693 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 15:13:26.029134    3693 round_trippers.go:577] Response Headers:
	I1212 15:13:26.029140    3693 round_trippers.go:580]     Audit-Id: cea7d2fd-c993-47ec-9399-6c7425fe7c80
	I1212 15:13:26.029145    3693 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 15:13:26.029157    3693 round_trippers.go:580]     Content-Type: application/json
	I1212 15:13:26.029163    3693 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e29f512c-9329-4cd8-9305-82ad26aea980
	I1212 15:13:26.029167    3693 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ddf0869-55a0-47cb-b7ef-d98ab66cac66
	I1212 15:13:26.029172    3693 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:13:26 GMT
	I1212 15:13:26.029249    3693 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-449000","uid":"ef12e061-4098-4e16-9f9d-81f611fa0b4f","resourceVersion":"306","creationTimestamp":"2023-12-12T23:13:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-449000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a4cfdd7fe6105c8f2fb237e157ac115c68ce5446","minikube.k8s.io/name":"multinode-449000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T15_13_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:13:01Z","fieldsType":"FieldsV1","fi [truncated 4926 chars]
	I1212 15:13:26.527011    3693 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-449000
	I1212 15:13:26.527037    3693 round_trippers.go:469] Request Headers:
	I1212 15:13:26.527051    3693 round_trippers.go:473]     Accept: application/json, */*
	I1212 15:13:26.527060    3693 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 15:13:26.529431    3693 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 15:13:26.529445    3693 round_trippers.go:577] Response Headers:
	I1212 15:13:26.529453    3693 round_trippers.go:580]     Audit-Id: 66108480-a170-4be1-a4f0-23df37fbfc11
	I1212 15:13:26.529460    3693 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 15:13:26.529467    3693 round_trippers.go:580]     Content-Type: application/json
	I1212 15:13:26.529473    3693 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e29f512c-9329-4cd8-9305-82ad26aea980
	I1212 15:13:26.529479    3693 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ddf0869-55a0-47cb-b7ef-d98ab66cac66
	I1212 15:13:26.529486    3693 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:13:26 GMT
	I1212 15:13:26.529589    3693 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-449000","uid":"ef12e061-4098-4e16-9f9d-81f611fa0b4f","resourceVersion":"306","creationTimestamp":"2023-12-12T23:13:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-449000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a4cfdd7fe6105c8f2fb237e157ac115c68ce5446","minikube.k8s.io/name":"multinode-449000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T15_13_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:13:01Z","fieldsType":"FieldsV1","fi [truncated 4926 chars]
	I1212 15:13:26.529850    3693 node_ready.go:58] node "multinode-449000" has status "Ready":"False"
	I1212 15:13:27.026929    3693 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-449000
	I1212 15:13:27.026949    3693 round_trippers.go:469] Request Headers:
	I1212 15:13:27.026962    3693 round_trippers.go:473]     Accept: application/json, */*
	I1212 15:13:27.026971    3693 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 15:13:27.029978    3693 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 15:13:27.029991    3693 round_trippers.go:577] Response Headers:
	I1212 15:13:27.029999    3693 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 15:13:27.030005    3693 round_trippers.go:580]     Content-Type: application/json
	I1212 15:13:27.030011    3693 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e29f512c-9329-4cd8-9305-82ad26aea980
	I1212 15:13:27.030018    3693 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ddf0869-55a0-47cb-b7ef-d98ab66cac66
	I1212 15:13:27.030029    3693 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:13:27 GMT
	I1212 15:13:27.030036    3693 round_trippers.go:580]     Audit-Id: 1c0dc137-ec71-4161-b645-21c0854f9b30
	I1212 15:13:27.030342    3693 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-449000","uid":"ef12e061-4098-4e16-9f9d-81f611fa0b4f","resourceVersion":"306","creationTimestamp":"2023-12-12T23:13:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-449000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a4cfdd7fe6105c8f2fb237e157ac115c68ce5446","minikube.k8s.io/name":"multinode-449000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T15_13_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:13:01Z","fieldsType":"FieldsV1","fi [truncated 4926 chars]
	I1212 15:13:27.527453    3693 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-449000
	I1212 15:13:27.527469    3693 round_trippers.go:469] Request Headers:
	I1212 15:13:27.527478    3693 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 15:13:27.527488    3693 round_trippers.go:473]     Accept: application/json, */*
	I1212 15:13:27.529401    3693 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 15:13:27.529411    3693 round_trippers.go:577] Response Headers:
	I1212 15:13:27.529417    3693 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e29f512c-9329-4cd8-9305-82ad26aea980
	I1212 15:13:27.529422    3693 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ddf0869-55a0-47cb-b7ef-d98ab66cac66
	I1212 15:13:27.529426    3693 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:13:27 GMT
	I1212 15:13:27.529431    3693 round_trippers.go:580]     Audit-Id: 52aec2b7-9b6a-46fc-8682-96dc8282346c
	I1212 15:13:27.529435    3693 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 15:13:27.529440    3693 round_trippers.go:580]     Content-Type: application/json
	I1212 15:13:27.529612    3693 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-449000","uid":"ef12e061-4098-4e16-9f9d-81f611fa0b4f","resourceVersion":"306","creationTimestamp":"2023-12-12T23:13:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-449000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a4cfdd7fe6105c8f2fb237e157ac115c68ce5446","minikube.k8s.io/name":"multinode-449000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T15_13_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:13:01Z","fieldsType":"FieldsV1","fi [truncated 4926 chars]
	I1212 15:13:28.027196    3693 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-449000
	I1212 15:13:28.027210    3693 round_trippers.go:469] Request Headers:
	I1212 15:13:28.027216    3693 round_trippers.go:473]     Accept: application/json, */*
	I1212 15:13:28.027221    3693 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 15:13:28.033464    3693 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1212 15:13:28.033477    3693 round_trippers.go:577] Response Headers:
	I1212 15:13:28.033482    3693 round_trippers.go:580]     Audit-Id: a65d1801-b6f4-4ea5-9dcd-de9fbf3ba21b
	I1212 15:13:28.033493    3693 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 15:13:28.033500    3693 round_trippers.go:580]     Content-Type: application/json
	I1212 15:13:28.033505    3693 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e29f512c-9329-4cd8-9305-82ad26aea980
	I1212 15:13:28.033512    3693 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ddf0869-55a0-47cb-b7ef-d98ab66cac66
	I1212 15:13:28.033520    3693 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:13:28 GMT
	I1212 15:13:28.033600    3693 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-449000","uid":"ef12e061-4098-4e16-9f9d-81f611fa0b4f","resourceVersion":"395","creationTimestamp":"2023-12-12T23:13:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-449000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a4cfdd7fe6105c8f2fb237e157ac115c68ce5446","minikube.k8s.io/name":"multinode-449000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T15_13_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:13:01Z","fieldsType":"FieldsV1","fi [truncated 4781 chars]
	I1212 15:13:28.033792    3693 node_ready.go:49] node "multinode-449000" has status "Ready":"True"
	I1212 15:13:28.033804    3693 node_ready.go:38] duration metric: took 10.517548611s waiting for node "multinode-449000" to be "Ready" ...
	I1212 15:13:28.033811    3693 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
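
	The node_ready entries above record the test polling GET https://192.169.0.13:8443/api/v1/nodes/multinode-449000 roughly every 500 ms until the node's Ready condition flips to True (about 10.5 s in this run), after which it moves on to the per-pod waits below. A minimal client-go sketch of that style of readiness poll follows for reference; the kubeconfig path, node name, 6-minute deadline, and helper names are illustrative assumptions, not minikube's own wait implementation.

	    // Minimal sketch (assumptions noted above): poll a node until its Ready
	    // condition reports True, mirroring the node_ready wait logged here.
	    package main

	    import (
	        "context"
	        "fmt"
	        "time"

	        corev1 "k8s.io/api/core/v1"
	        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	        "k8s.io/client-go/kubernetes"
	        "k8s.io/client-go/tools/clientcmd"
	    )

	    func nodeIsReady(node *corev1.Node) bool {
	        for _, c := range node.Status.Conditions {
	            if c.Type == corev1.NodeReady {
	                return c.Status == corev1.ConditionTrue
	            }
	        }
	        return false
	    }

	    func main() {
	        // Assumed kubeconfig location; the test run uses its own KUBECONFIG.
	        cfg, err := clientcmd.BuildConfigFromFlags("", "/Users/jenkins/.kube/config")
	        if err != nil {
	            panic(err)
	        }
	        client, err := kubernetes.NewForConfig(cfg)
	        if err != nil {
	            panic(err)
	        }
	        deadline := time.Now().Add(6 * time.Minute)
	        for time.Now().Before(deadline) {
	            node, err := client.CoreV1().Nodes().Get(context.TODO(), "multinode-449000", metav1.GetOptions{})
	            if err == nil && nodeIsReady(node) {
	                fmt.Println("node is Ready")
	                return
	            }
	            time.Sleep(500 * time.Millisecond) // matches the ~500 ms cadence visible above
	        }
	        fmt.Println("timed out waiting for node Ready")
	    }
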
	I1212 15:13:28.033851    3693 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods
	I1212 15:13:28.033856    3693 round_trippers.go:469] Request Headers:
	I1212 15:13:28.033861    3693 round_trippers.go:473]     Accept: application/json, */*
	I1212 15:13:28.033866    3693 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 15:13:28.035801    3693 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 15:13:28.035811    3693 round_trippers.go:577] Response Headers:
	I1212 15:13:28.035816    3693 round_trippers.go:580]     Audit-Id: 39612455-691c-4599-b0c6-586e1b216fe7
	I1212 15:13:28.035821    3693 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 15:13:28.035825    3693 round_trippers.go:580]     Content-Type: application/json
	I1212 15:13:28.035831    3693 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e29f512c-9329-4cd8-9305-82ad26aea980
	I1212 15:13:28.035835    3693 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ddf0869-55a0-47cb-b7ef-d98ab66cac66
	I1212 15:13:28.035840    3693 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:13:28 GMT
	I1212 15:13:28.036244    3693 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"401"},"items":[{"metadata":{"name":"coredns-5dd5756b68-gbw2q","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"09d20e99-6d1a-46d5-858f-71585ab9e532","resourceVersion":"401","creationTimestamp":"2023-12-12T23:13:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a4c42d0c-8d38-4a37-89f9-122c8abf6177","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:13:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a4c42d0c-8d38-4a37-89f9-122c8abf6177\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 53932 chars]
	I1212 15:13:28.038559    3693 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-gbw2q" in "kube-system" namespace to be "Ready" ...
	I1212 15:13:28.038603    3693 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-gbw2q
	I1212 15:13:28.038608    3693 round_trippers.go:469] Request Headers:
	I1212 15:13:28.038613    3693 round_trippers.go:473]     Accept: application/json, */*
	I1212 15:13:28.038619    3693 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 15:13:28.040079    3693 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 15:13:28.040093    3693 round_trippers.go:577] Response Headers:
	I1212 15:13:28.040101    3693 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ddf0869-55a0-47cb-b7ef-d98ab66cac66
	I1212 15:13:28.040108    3693 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:13:28 GMT
	I1212 15:13:28.040115    3693 round_trippers.go:580]     Audit-Id: 1165efd6-6fc4-46f2-b888-5be89b057760
	I1212 15:13:28.040123    3693 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 15:13:28.040128    3693 round_trippers.go:580]     Content-Type: application/json
	I1212 15:13:28.040133    3693 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e29f512c-9329-4cd8-9305-82ad26aea980
	I1212 15:13:28.040253    3693 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-gbw2q","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"09d20e99-6d1a-46d5-858f-71585ab9e532","resourceVersion":"401","creationTimestamp":"2023-12-12T23:13:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a4c42d0c-8d38-4a37-89f9-122c8abf6177","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:13:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a4c42d0c-8d38-4a37-89f9-122c8abf6177\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I1212 15:13:28.040504    3693 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-449000
	I1212 15:13:28.040513    3693 round_trippers.go:469] Request Headers:
	I1212 15:13:28.040518    3693 round_trippers.go:473]     Accept: application/json, */*
	I1212 15:13:28.040524    3693 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 15:13:28.041673    3693 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 15:13:28.041680    3693 round_trippers.go:577] Response Headers:
	I1212 15:13:28.041685    3693 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e29f512c-9329-4cd8-9305-82ad26aea980
	I1212 15:13:28.041689    3693 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ddf0869-55a0-47cb-b7ef-d98ab66cac66
	I1212 15:13:28.041694    3693 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:13:28 GMT
	I1212 15:13:28.041698    3693 round_trippers.go:580]     Audit-Id: 379d9a6a-8fbe-43bf-a088-b4abc3b9c77d
	I1212 15:13:28.041702    3693 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 15:13:28.041710    3693 round_trippers.go:580]     Content-Type: application/json
	I1212 15:13:28.041933    3693 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-449000","uid":"ef12e061-4098-4e16-9f9d-81f611fa0b4f","resourceVersion":"395","creationTimestamp":"2023-12-12T23:13:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-449000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a4cfdd7fe6105c8f2fb237e157ac115c68ce5446","minikube.k8s.io/name":"multinode-449000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T15_13_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:13:01Z","fieldsType":"FieldsV1","fi [truncated 4781 chars]
	I1212 15:13:28.042121    3693 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-gbw2q
	I1212 15:13:28.042128    3693 round_trippers.go:469] Request Headers:
	I1212 15:13:28.042134    3693 round_trippers.go:473]     Accept: application/json, */*
	I1212 15:13:28.042142    3693 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 15:13:28.043266    3693 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 15:13:28.043274    3693 round_trippers.go:577] Response Headers:
	I1212 15:13:28.043280    3693 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e29f512c-9329-4cd8-9305-82ad26aea980
	I1212 15:13:28.043284    3693 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ddf0869-55a0-47cb-b7ef-d98ab66cac66
	I1212 15:13:28.043290    3693 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:13:28 GMT
	I1212 15:13:28.043295    3693 round_trippers.go:580]     Audit-Id: d3ee730c-8e77-4883-8706-4b1aa88bdcff
	I1212 15:13:28.043301    3693 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 15:13:28.043308    3693 round_trippers.go:580]     Content-Type: application/json
	I1212 15:13:28.043376    3693 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-gbw2q","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"09d20e99-6d1a-46d5-858f-71585ab9e532","resourceVersion":"401","creationTimestamp":"2023-12-12T23:13:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a4c42d0c-8d38-4a37-89f9-122c8abf6177","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:13:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a4c42d0c-8d38-4a37-89f9-122c8abf6177\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I1212 15:13:28.043606    3693 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-449000
	I1212 15:13:28.043613    3693 round_trippers.go:469] Request Headers:
	I1212 15:13:28.043619    3693 round_trippers.go:473]     Accept: application/json, */*
	I1212 15:13:28.043624    3693 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 15:13:28.044707    3693 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 15:13:28.044715    3693 round_trippers.go:577] Response Headers:
	I1212 15:13:28.044726    3693 round_trippers.go:580]     Audit-Id: 61b77f15-9278-4314-8601-12b14702beaf
	I1212 15:13:28.044733    3693 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 15:13:28.044738    3693 round_trippers.go:580]     Content-Type: application/json
	I1212 15:13:28.044745    3693 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e29f512c-9329-4cd8-9305-82ad26aea980
	I1212 15:13:28.044750    3693 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ddf0869-55a0-47cb-b7ef-d98ab66cac66
	I1212 15:13:28.044757    3693 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:13:28 GMT
	I1212 15:13:28.044976    3693 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-449000","uid":"ef12e061-4098-4e16-9f9d-81f611fa0b4f","resourceVersion":"395","creationTimestamp":"2023-12-12T23:13:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-449000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a4cfdd7fe6105c8f2fb237e157ac115c68ce5446","minikube.k8s.io/name":"multinode-449000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T15_13_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:13:01Z","fieldsType":"FieldsV1","fi [truncated 4781 chars]
	I1212 15:13:28.545854    3693 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-gbw2q
	I1212 15:13:28.545870    3693 round_trippers.go:469] Request Headers:
	I1212 15:13:28.545878    3693 round_trippers.go:473]     Accept: application/json, */*
	I1212 15:13:28.545900    3693 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 15:13:28.550985    3693 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1212 15:13:28.550999    3693 round_trippers.go:577] Response Headers:
	I1212 15:13:28.551005    3693 round_trippers.go:580]     Audit-Id: 956ad410-7ccb-4347-b306-a9c2d5d30841
	I1212 15:13:28.551010    3693 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 15:13:28.551015    3693 round_trippers.go:580]     Content-Type: application/json
	I1212 15:13:28.551019    3693 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e29f512c-9329-4cd8-9305-82ad26aea980
	I1212 15:13:28.551024    3693 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ddf0869-55a0-47cb-b7ef-d98ab66cac66
	I1212 15:13:28.551029    3693 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:13:28 GMT
	I1212 15:13:28.551134    3693 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-gbw2q","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"09d20e99-6d1a-46d5-858f-71585ab9e532","resourceVersion":"401","creationTimestamp":"2023-12-12T23:13:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a4c42d0c-8d38-4a37-89f9-122c8abf6177","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:13:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a4c42d0c-8d38-4a37-89f9-122c8abf6177\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I1212 15:13:28.551444    3693 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-449000
	I1212 15:13:28.551452    3693 round_trippers.go:469] Request Headers:
	I1212 15:13:28.551458    3693 round_trippers.go:473]     Accept: application/json, */*
	I1212 15:13:28.551463    3693 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 15:13:28.553728    3693 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 15:13:28.553740    3693 round_trippers.go:577] Response Headers:
	I1212 15:13:28.553745    3693 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ddf0869-55a0-47cb-b7ef-d98ab66cac66
	I1212 15:13:28.553750    3693 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:13:28 GMT
	I1212 15:13:28.553756    3693 round_trippers.go:580]     Audit-Id: 58810aac-3f8b-4a2c-8275-f5ba47971c1e
	I1212 15:13:28.553772    3693 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 15:13:28.553778    3693 round_trippers.go:580]     Content-Type: application/json
	I1212 15:13:28.553783    3693 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e29f512c-9329-4cd8-9305-82ad26aea980
	I1212 15:13:28.553957    3693 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-449000","uid":"ef12e061-4098-4e16-9f9d-81f611fa0b4f","resourceVersion":"395","creationTimestamp":"2023-12-12T23:13:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-449000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a4cfdd7fe6105c8f2fb237e157ac115c68ce5446","minikube.k8s.io/name":"multinode-449000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T15_13_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:13:01Z","fieldsType":"FieldsV1","fi [truncated 4781 chars]
	I1212 15:13:29.046304    3693 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-gbw2q
	I1212 15:13:29.046321    3693 round_trippers.go:469] Request Headers:
	I1212 15:13:29.046340    3693 round_trippers.go:473]     Accept: application/json, */*
	I1212 15:13:29.046346    3693 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 15:13:29.047995    3693 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 15:13:29.048009    3693 round_trippers.go:577] Response Headers:
	I1212 15:13:29.048017    3693 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e29f512c-9329-4cd8-9305-82ad26aea980
	I1212 15:13:29.048026    3693 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ddf0869-55a0-47cb-b7ef-d98ab66cac66
	I1212 15:13:29.048037    3693 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:13:29 GMT
	I1212 15:13:29.048042    3693 round_trippers.go:580]     Audit-Id: 00164462-207f-4bdb-ad12-ebbce0861968
	I1212 15:13:29.048047    3693 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 15:13:29.048051    3693 round_trippers.go:580]     Content-Type: application/json
	I1212 15:13:29.048293    3693 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-gbw2q","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"09d20e99-6d1a-46d5-858f-71585ab9e532","resourceVersion":"401","creationTimestamp":"2023-12-12T23:13:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a4c42d0c-8d38-4a37-89f9-122c8abf6177","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:13:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a4c42d0c-8d38-4a37-89f9-122c8abf6177\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I1212 15:13:29.048570    3693 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-449000
	I1212 15:13:29.048586    3693 round_trippers.go:469] Request Headers:
	I1212 15:13:29.048592    3693 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 15:13:29.048598    3693 round_trippers.go:473]     Accept: application/json, */*
	I1212 15:13:29.049983    3693 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 15:13:29.049992    3693 round_trippers.go:577] Response Headers:
	I1212 15:13:29.049998    3693 round_trippers.go:580]     Content-Type: application/json
	I1212 15:13:29.050003    3693 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e29f512c-9329-4cd8-9305-82ad26aea980
	I1212 15:13:29.050007    3693 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ddf0869-55a0-47cb-b7ef-d98ab66cac66
	I1212 15:13:29.050012    3693 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:13:29 GMT
	I1212 15:13:29.050017    3693 round_trippers.go:580]     Audit-Id: a73c2ce5-f7d9-483d-a1c3-62af423d06f6
	I1212 15:13:29.050022    3693 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 15:13:29.050132    3693 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-449000","uid":"ef12e061-4098-4e16-9f9d-81f611fa0b4f","resourceVersion":"395","creationTimestamp":"2023-12-12T23:13:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-449000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a4cfdd7fe6105c8f2fb237e157ac115c68ce5446","minikube.k8s.io/name":"multinode-449000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T15_13_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:13:01Z","fieldsType":"FieldsV1","fi [truncated 4781 chars]
	I1212 15:13:29.545771    3693 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-gbw2q
	I1212 15:13:29.545799    3693 round_trippers.go:469] Request Headers:
	I1212 15:13:29.545812    3693 round_trippers.go:473]     Accept: application/json, */*
	I1212 15:13:29.545827    3693 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 15:13:29.548341    3693 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 15:13:29.548355    3693 round_trippers.go:577] Response Headers:
	I1212 15:13:29.548363    3693 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 15:13:29.548370    3693 round_trippers.go:580]     Content-Type: application/json
	I1212 15:13:29.548381    3693 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e29f512c-9329-4cd8-9305-82ad26aea980
	I1212 15:13:29.548395    3693 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ddf0869-55a0-47cb-b7ef-d98ab66cac66
	I1212 15:13:29.548402    3693 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:13:29 GMT
	I1212 15:13:29.548409    3693 round_trippers.go:580]     Audit-Id: a6b9ada5-9963-4972-afcd-76945301e2d5
	I1212 15:13:29.548682    3693 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-gbw2q","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"09d20e99-6d1a-46d5-858f-71585ab9e532","resourceVersion":"401","creationTimestamp":"2023-12-12T23:13:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a4c42d0c-8d38-4a37-89f9-122c8abf6177","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:13:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a4c42d0c-8d38-4a37-89f9-122c8abf6177\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I1212 15:13:29.549055    3693 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-449000
	I1212 15:13:29.549062    3693 round_trippers.go:469] Request Headers:
	I1212 15:13:29.549068    3693 round_trippers.go:473]     Accept: application/json, */*
	I1212 15:13:29.549073    3693 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 15:13:29.550490    3693 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 15:13:29.550501    3693 round_trippers.go:577] Response Headers:
	I1212 15:13:29.550509    3693 round_trippers.go:580]     Content-Type: application/json
	I1212 15:13:29.550523    3693 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e29f512c-9329-4cd8-9305-82ad26aea980
	I1212 15:13:29.550531    3693 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ddf0869-55a0-47cb-b7ef-d98ab66cac66
	I1212 15:13:29.550545    3693 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:13:29 GMT
	I1212 15:13:29.550552    3693 round_trippers.go:580]     Audit-Id: 4246a0fe-64c5-4345-9b6b-8d50824be114
	I1212 15:13:29.550558    3693 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 15:13:29.550676    3693 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-449000","uid":"ef12e061-4098-4e16-9f9d-81f611fa0b4f","resourceVersion":"395","creationTimestamp":"2023-12-12T23:13:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-449000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a4cfdd7fe6105c8f2fb237e157ac115c68ce5446","minikube.k8s.io/name":"multinode-449000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T15_13_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:13:01Z","fieldsType":"FieldsV1","fi [truncated 4781 chars]
	I1212 15:13:30.046803    3693 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-gbw2q
	I1212 15:13:30.046830    3693 round_trippers.go:469] Request Headers:
	I1212 15:13:30.046846    3693 round_trippers.go:473]     Accept: application/json, */*
	I1212 15:13:30.046857    3693 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 15:13:30.049634    3693 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 15:13:30.049648    3693 round_trippers.go:577] Response Headers:
	I1212 15:13:30.049655    3693 round_trippers.go:580]     Audit-Id: b81ed7ad-dc58-454f-92e7-554ebbbac60c
	I1212 15:13:30.049662    3693 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 15:13:30.049668    3693 round_trippers.go:580]     Content-Type: application/json
	I1212 15:13:30.049674    3693 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e29f512c-9329-4cd8-9305-82ad26aea980
	I1212 15:13:30.049681    3693 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ddf0869-55a0-47cb-b7ef-d98ab66cac66
	I1212 15:13:30.049687    3693 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:13:30 GMT
	I1212 15:13:30.049851    3693 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-gbw2q","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"09d20e99-6d1a-46d5-858f-71585ab9e532","resourceVersion":"414","creationTimestamp":"2023-12-12T23:13:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a4c42d0c-8d38-4a37-89f9-122c8abf6177","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:13:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a4c42d0c-8d38-4a37-89f9-122c8abf6177\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6282 chars]
	I1212 15:13:30.050238    3693 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-449000
	I1212 15:13:30.050248    3693 round_trippers.go:469] Request Headers:
	I1212 15:13:30.050257    3693 round_trippers.go:473]     Accept: application/json, */*
	I1212 15:13:30.050265    3693 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 15:13:30.051794    3693 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 15:13:30.051803    3693 round_trippers.go:577] Response Headers:
	I1212 15:13:30.051808    3693 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ddf0869-55a0-47cb-b7ef-d98ab66cac66
	I1212 15:13:30.051813    3693 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:13:30 GMT
	I1212 15:13:30.051818    3693 round_trippers.go:580]     Audit-Id: a2ea292b-d77d-4a3b-8100-4a01f8ebc7d8
	I1212 15:13:30.051823    3693 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 15:13:30.051828    3693 round_trippers.go:580]     Content-Type: application/json
	I1212 15:13:30.051833    3693 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e29f512c-9329-4cd8-9305-82ad26aea980
	I1212 15:13:30.051906    3693 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-449000","uid":"ef12e061-4098-4e16-9f9d-81f611fa0b4f","resourceVersion":"395","creationTimestamp":"2023-12-12T23:13:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-449000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a4cfdd7fe6105c8f2fb237e157ac115c68ce5446","minikube.k8s.io/name":"multinode-449000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T15_13_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:13:01Z","fieldsType":"FieldsV1","fi [truncated 4781 chars]
	I1212 15:13:30.052087    3693 pod_ready.go:92] pod "coredns-5dd5756b68-gbw2q" in "kube-system" namespace has status "Ready":"True"
	I1212 15:13:30.052096    3693 pod_ready.go:81] duration metric: took 2.013541321s waiting for pod "coredns-5dd5756b68-gbw2q" in "kube-system" namespace to be "Ready" ...
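
	The remaining pod waits in this log (etcd, kube-apiserver, kube-controller-manager below) apply the same kind of check to each system pod's PodReady condition. A short sketch of such a check is included here; the package name and helper signature are illustrative assumptions, not the pod_ready.go implementation.

	    // Minimal sketch (illustrative names): report whether a pod is Ready by
	    // inspecting its PodReady condition, as the per-pod waits in this log do.
	    package poller

	    import (
	        "context"

	        corev1 "k8s.io/api/core/v1"
	        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	        "k8s.io/client-go/kubernetes"
	    )

	    func podIsReady(ctx context.Context, client kubernetes.Interface, namespace, name string) (bool, error) {
	        pod, err := client.CoreV1().Pods(namespace).Get(ctx, name, metav1.GetOptions{})
	        if err != nil {
	            return false, err
	        }
	        for _, c := range pod.Status.Conditions {
	            if c.Type == corev1.PodReady {
	                return c.Status == corev1.ConditionTrue, nil
	            }
	        }
	        return false, nil
	    }

	Under those assumptions, a call such as podIsReady(ctx, client, "kube-system", "etcd-multinode-449000") corresponds to the etcd wait that starts on the next line.
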
	I1212 15:13:30.052103    3693 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-449000" in "kube-system" namespace to be "Ready" ...
	I1212 15:13:30.052134    3693 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-449000
	I1212 15:13:30.052139    3693 round_trippers.go:469] Request Headers:
	I1212 15:13:30.052145    3693 round_trippers.go:473]     Accept: application/json, */*
	I1212 15:13:30.052150    3693 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 15:13:30.053346    3693 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 15:13:30.053356    3693 round_trippers.go:577] Response Headers:
	I1212 15:13:30.053370    3693 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ddf0869-55a0-47cb-b7ef-d98ab66cac66
	I1212 15:13:30.053379    3693 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:13:30 GMT
	I1212 15:13:30.053403    3693 round_trippers.go:580]     Audit-Id: 0313ec20-7ef1-470b-a170-389c30edda17
	I1212 15:13:30.053414    3693 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 15:13:30.053422    3693 round_trippers.go:580]     Content-Type: application/json
	I1212 15:13:30.053430    3693 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e29f512c-9329-4cd8-9305-82ad26aea980
	I1212 15:13:30.053513    3693 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-449000","namespace":"kube-system","uid":"193c5da5-9957-4b0c-ac1f-0883f287dc0d","resourceVersion":"389","creationTimestamp":"2023-12-12T23:13:04Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.169.0.13:2379","kubernetes.io/config.hash":"1a832df13b4e9773d7a6b67fbfc8fb00","kubernetes.io/config.mirror":"1a832df13b4e9773d7a6b67fbfc8fb00","kubernetes.io/config.seen":"2023-12-12T23:13:04.726760505Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-449000","uid":"ef12e061-4098-4e16-9f9d-81f611fa0b4f","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:13:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 5852 chars]
	I1212 15:13:30.053736    3693 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-449000
	I1212 15:13:30.053743    3693 round_trippers.go:469] Request Headers:
	I1212 15:13:30.053749    3693 round_trippers.go:473]     Accept: application/json, */*
	I1212 15:13:30.053754    3693 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 15:13:30.054833    3693 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 15:13:30.054840    3693 round_trippers.go:577] Response Headers:
	I1212 15:13:30.054845    3693 round_trippers.go:580]     Audit-Id: 3ad72176-02f4-4788-b6c8-4ac113af2a2f
	I1212 15:13:30.054849    3693 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 15:13:30.054853    3693 round_trippers.go:580]     Content-Type: application/json
	I1212 15:13:30.054859    3693 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e29f512c-9329-4cd8-9305-82ad26aea980
	I1212 15:13:30.054871    3693 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ddf0869-55a0-47cb-b7ef-d98ab66cac66
	I1212 15:13:30.054881    3693 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:13:30 GMT
	I1212 15:13:30.055002    3693 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-449000","uid":"ef12e061-4098-4e16-9f9d-81f611fa0b4f","resourceVersion":"395","creationTimestamp":"2023-12-12T23:13:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-449000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a4cfdd7fe6105c8f2fb237e157ac115c68ce5446","minikube.k8s.io/name":"multinode-449000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T15_13_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:13:01Z","fieldsType":"FieldsV1","fi [truncated 4781 chars]
	I1212 15:13:30.055175    3693 pod_ready.go:92] pod "etcd-multinode-449000" in "kube-system" namespace has status "Ready":"True"
	I1212 15:13:30.055183    3693 pod_ready.go:81] duration metric: took 3.07583ms waiting for pod "etcd-multinode-449000" in "kube-system" namespace to be "Ready" ...
	I1212 15:13:30.055191    3693 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-449000" in "kube-system" namespace to be "Ready" ...
	I1212 15:13:30.055220    3693 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-449000
	I1212 15:13:30.055225    3693 round_trippers.go:469] Request Headers:
	I1212 15:13:30.055230    3693 round_trippers.go:473]     Accept: application/json, */*
	I1212 15:13:30.055236    3693 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 15:13:30.056409    3693 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 15:13:30.056420    3693 round_trippers.go:577] Response Headers:
	I1212 15:13:30.056425    3693 round_trippers.go:580]     Content-Type: application/json
	I1212 15:13:30.056430    3693 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e29f512c-9329-4cd8-9305-82ad26aea980
	I1212 15:13:30.056434    3693 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ddf0869-55a0-47cb-b7ef-d98ab66cac66
	I1212 15:13:30.056440    3693 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:13:30 GMT
	I1212 15:13:30.056444    3693 round_trippers.go:580]     Audit-Id: 3864e748-ea77-4429-824c-d7ec2d38c972
	I1212 15:13:30.056452    3693 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 15:13:30.056608    3693 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-449000","namespace":"kube-system","uid":"d0340375-33dc-42b7-9b1d-6e66ff24d07b","resourceVersion":"391","creationTimestamp":"2023-12-12T23:13:02Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.169.0.13:8443","kubernetes.io/config.hash":"713a71f0e8f1e4f4a127fa5f9adf437f","kubernetes.io/config.mirror":"713a71f0e8f1e4f4a127fa5f9adf437f","kubernetes.io/config.seen":"2023-12-12T23:12:58.089999663Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-449000","uid":"ef12e061-4098-4e16-9f9d-81f611fa0b4f","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:13:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 7390 chars]
	I1212 15:13:30.056836    3693 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-449000
	I1212 15:13:30.056843    3693 round_trippers.go:469] Request Headers:
	I1212 15:13:30.056849    3693 round_trippers.go:473]     Accept: application/json, */*
	I1212 15:13:30.056855    3693 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 15:13:30.057866    3693 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 15:13:30.057873    3693 round_trippers.go:577] Response Headers:
	I1212 15:13:30.057877    3693 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ddf0869-55a0-47cb-b7ef-d98ab66cac66
	I1212 15:13:30.057882    3693 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:13:30 GMT
	I1212 15:13:30.057886    3693 round_trippers.go:580]     Audit-Id: e0556672-bab6-4f56-bf2a-9481284a9b8e
	I1212 15:13:30.057892    3693 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 15:13:30.057896    3693 round_trippers.go:580]     Content-Type: application/json
	I1212 15:13:30.057901    3693 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e29f512c-9329-4cd8-9305-82ad26aea980
	I1212 15:13:30.058024    3693 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-449000","uid":"ef12e061-4098-4e16-9f9d-81f611fa0b4f","resourceVersion":"395","creationTimestamp":"2023-12-12T23:13:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-449000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a4cfdd7fe6105c8f2fb237e157ac115c68ce5446","minikube.k8s.io/name":"multinode-449000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T15_13_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:13:01Z","fieldsType":"FieldsV1","fi [truncated 4781 chars]
	I1212 15:13:30.058180    3693 pod_ready.go:92] pod "kube-apiserver-multinode-449000" in "kube-system" namespace has status "Ready":"True"
	I1212 15:13:30.058187    3693 pod_ready.go:81] duration metric: took 2.99178ms waiting for pod "kube-apiserver-multinode-449000" in "kube-system" namespace to be "Ready" ...
	I1212 15:13:30.058193    3693 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-449000" in "kube-system" namespace to be "Ready" ...
	I1212 15:13:30.058218    3693 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-449000
	I1212 15:13:30.058226    3693 round_trippers.go:469] Request Headers:
	I1212 15:13:30.058232    3693 round_trippers.go:473]     Accept: application/json, */*
	I1212 15:13:30.058237    3693 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 15:13:30.059430    3693 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 15:13:30.059439    3693 round_trippers.go:577] Response Headers:
	I1212 15:13:30.059444    3693 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ddf0869-55a0-47cb-b7ef-d98ab66cac66
	I1212 15:13:30.059449    3693 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:13:30 GMT
	I1212 15:13:30.059454    3693 round_trippers.go:580]     Audit-Id: b81c455a-9b54-48d2-b123-bd48e1cd468f
	I1212 15:13:30.059459    3693 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 15:13:30.059464    3693 round_trippers.go:580]     Content-Type: application/json
	I1212 15:13:30.059472    3693 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e29f512c-9329-4cd8-9305-82ad26aea980
	I1212 15:13:30.059689    3693 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-449000","namespace":"kube-system","uid":"3cdec7d9-450b-47be-b93b-a5f3985415fa","resourceVersion":"390","creationTimestamp":"2023-12-12T23:13:04Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"ff9ba650fd2dd54b1306fbad348194b4","kubernetes.io/config.mirror":"ff9ba650fd2dd54b1306fbad348194b4","kubernetes.io/config.seen":"2023-12-12T23:12:58.090000240Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-449000","uid":"ef12e061-4098-4e16-9f9d-81f611fa0b4f","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:13:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6965 chars]
	I1212 15:13:30.059925    3693 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-449000
	I1212 15:13:30.059932    3693 round_trippers.go:469] Request Headers:
	I1212 15:13:30.059938    3693 round_trippers.go:473]     Accept: application/json, */*
	I1212 15:13:30.059943    3693 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 15:13:30.061059    3693 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 15:13:30.061071    3693 round_trippers.go:577] Response Headers:
	I1212 15:13:30.061079    3693 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ddf0869-55a0-47cb-b7ef-d98ab66cac66
	I1212 15:13:30.061085    3693 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:13:30 GMT
	I1212 15:13:30.061091    3693 round_trippers.go:580]     Audit-Id: 3177910d-dc7f-47a3-8780-3572cd00ed01
	I1212 15:13:30.061097    3693 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 15:13:30.061105    3693 round_trippers.go:580]     Content-Type: application/json
	I1212 15:13:30.061134    3693 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e29f512c-9329-4cd8-9305-82ad26aea980
	I1212 15:13:30.061298    3693 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-449000","uid":"ef12e061-4098-4e16-9f9d-81f611fa0b4f","resourceVersion":"395","creationTimestamp":"2023-12-12T23:13:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-449000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a4cfdd7fe6105c8f2fb237e157ac115c68ce5446","minikube.k8s.io/name":"multinode-449000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T15_13_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:13:01Z","fieldsType":"FieldsV1","fi [truncated 4781 chars]
	I1212 15:13:30.061450    3693 pod_ready.go:92] pod "kube-controller-manager-multinode-449000" in "kube-system" namespace has status "Ready":"True"
	I1212 15:13:30.061457    3693 pod_ready.go:81] duration metric: took 3.258723ms waiting for pod "kube-controller-manager-multinode-449000" in "kube-system" namespace to be "Ready" ...
	I1212 15:13:30.061463    3693 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-hxq22" in "kube-system" namespace to be "Ready" ...
	I1212 15:13:30.228228    3693 request.go:629] Waited for 166.738829ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hxq22
	I1212 15:13:30.228284    3693 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hxq22
	I1212 15:13:30.228291    3693 round_trippers.go:469] Request Headers:
	I1212 15:13:30.228297    3693 round_trippers.go:473]     Accept: application/json, */*
	I1212 15:13:30.228303    3693 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 15:13:30.230169    3693 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 15:13:30.230182    3693 round_trippers.go:577] Response Headers:
	I1212 15:13:30.230188    3693 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:13:30 GMT
	I1212 15:13:30.230193    3693 round_trippers.go:580]     Audit-Id: b2bb4374-5cc7-4450-8f6a-0fff391a3d5d
	I1212 15:13:30.230197    3693 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 15:13:30.230202    3693 round_trippers.go:580]     Content-Type: application/json
	I1212 15:13:30.230206    3693 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e29f512c-9329-4cd8-9305-82ad26aea980
	I1212 15:13:30.230210    3693 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ddf0869-55a0-47cb-b7ef-d98ab66cac66
	I1212 15:13:30.230395    3693 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-hxq22","generateName":"kube-proxy-","namespace":"kube-system","uid":"d330b0b4-7d3f-4386-a72d-cb235945c494","resourceVersion":"379","creationTimestamp":"2023-12-12T23:13:17Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"baac289e-d94d-427e-ad81-e4b30512f118","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:13:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"baac289e-d94d-427e-ad81-e4b30512f118\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5529 chars]
	I1212 15:13:30.428792    3693 request.go:629] Waited for 198.092768ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/nodes/multinode-449000
	I1212 15:13:30.428867    3693 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-449000
	I1212 15:13:30.428882    3693 round_trippers.go:469] Request Headers:
	I1212 15:13:30.428894    3693 round_trippers.go:473]     Accept: application/json, */*
	I1212 15:13:30.428907    3693 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 15:13:30.431608    3693 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 15:13:30.431629    3693 round_trippers.go:577] Response Headers:
	I1212 15:13:30.431654    3693 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e29f512c-9329-4cd8-9305-82ad26aea980
	I1212 15:13:30.431663    3693 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ddf0869-55a0-47cb-b7ef-d98ab66cac66
	I1212 15:13:30.431669    3693 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:13:30 GMT
	I1212 15:13:30.431676    3693 round_trippers.go:580]     Audit-Id: adae4403-c21a-4f9c-a6f0-ab208a5e8eca
	I1212 15:13:30.431682    3693 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 15:13:30.431688    3693 round_trippers.go:580]     Content-Type: application/json
	I1212 15:13:30.431911    3693 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-449000","uid":"ef12e061-4098-4e16-9f9d-81f611fa0b4f","resourceVersion":"395","creationTimestamp":"2023-12-12T23:13:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-449000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a4cfdd7fe6105c8f2fb237e157ac115c68ce5446","minikube.k8s.io/name":"multinode-449000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T15_13_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:13:01Z","fieldsType":"FieldsV1","fi [truncated 4781 chars]
	I1212 15:13:30.432169    3693 pod_ready.go:92] pod "kube-proxy-hxq22" in "kube-system" namespace has status "Ready":"True"
	I1212 15:13:30.432181    3693 pod_ready.go:81] duration metric: took 370.7151ms waiting for pod "kube-proxy-hxq22" in "kube-system" namespace to be "Ready" ...
	I1212 15:13:30.432212    3693 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-449000" in "kube-system" namespace to be "Ready" ...
	I1212 15:13:30.628504    3693 request.go:629] Waited for 196.24182ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-449000
	I1212 15:13:30.628624    3693 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-449000
	I1212 15:13:30.628638    3693 round_trippers.go:469] Request Headers:
	I1212 15:13:30.628649    3693 round_trippers.go:473]     Accept: application/json, */*
	I1212 15:13:30.628659    3693 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 15:13:30.631278    3693 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 15:13:30.631292    3693 round_trippers.go:577] Response Headers:
	I1212 15:13:30.631300    3693 round_trippers.go:580]     Audit-Id: 5ed42b1c-3718-4d8e-8bc0-a2d24fb48ea3
	I1212 15:13:30.631307    3693 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 15:13:30.631313    3693 round_trippers.go:580]     Content-Type: application/json
	I1212 15:13:30.631321    3693 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e29f512c-9329-4cd8-9305-82ad26aea980
	I1212 15:13:30.631345    3693 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ddf0869-55a0-47cb-b7ef-d98ab66cac66
	I1212 15:13:30.631361    3693 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:13:30 GMT
	I1212 15:13:30.631451    3693 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-449000","namespace":"kube-system","uid":"6eda8382-3903-4ab4-96fb-afc4948c144b","resourceVersion":"388","creationTimestamp":"2023-12-12T23:13:04Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"d002db3a6af46c2d870b0132a00cfc72","kubernetes.io/config.mirror":"d002db3a6af46c2d870b0132a00cfc72","kubernetes.io/config.seen":"2023-12-12T23:13:04.726764045Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-449000","uid":"ef12e061-4098-4e16-9f9d-81f611fa0b4f","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:13:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4695 chars]
	I1212 15:13:30.827231    3693 request.go:629] Waited for 195.484059ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/nodes/multinode-449000
	I1212 15:13:30.827265    3693 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-449000
	I1212 15:13:30.827269    3693 round_trippers.go:469] Request Headers:
	I1212 15:13:30.827280    3693 round_trippers.go:473]     Accept: application/json, */*
	I1212 15:13:30.827287    3693 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 15:13:30.828947    3693 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 15:13:30.828955    3693 round_trippers.go:577] Response Headers:
	I1212 15:13:30.828960    3693 round_trippers.go:580]     Audit-Id: 0d61d345-3064-4bc5-b8e3-5e413edc2911
	I1212 15:13:30.828965    3693 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 15:13:30.828970    3693 round_trippers.go:580]     Content-Type: application/json
	I1212 15:13:30.828974    3693 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e29f512c-9329-4cd8-9305-82ad26aea980
	I1212 15:13:30.828979    3693 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ddf0869-55a0-47cb-b7ef-d98ab66cac66
	I1212 15:13:30.828984    3693 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:13:30 GMT
	I1212 15:13:30.829142    3693 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-449000","uid":"ef12e061-4098-4e16-9f9d-81f611fa0b4f","resourceVersion":"395","creationTimestamp":"2023-12-12T23:13:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-449000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a4cfdd7fe6105c8f2fb237e157ac115c68ce5446","minikube.k8s.io/name":"multinode-449000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T15_13_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:13:01Z","fieldsType":"FieldsV1","fi [truncated 4781 chars]
	I1212 15:13:30.829330    3693 pod_ready.go:92] pod "kube-scheduler-multinode-449000" in "kube-system" namespace has status "Ready":"True"
	I1212 15:13:30.829338    3693 pod_ready.go:81] duration metric: took 397.118915ms waiting for pod "kube-scheduler-multinode-449000" in "kube-system" namespace to be "Ready" ...
	I1212 15:13:30.829348    3693 pod_ready.go:38] duration metric: took 2.795543371s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 15:13:30.829366    3693 api_server.go:52] waiting for apiserver process to appear ...
	I1212 15:13:30.829416    3693 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 15:13:30.839857    3693 command_runner.go:130] > 1945
	I1212 15:13:30.839887    3693 api_server.go:72] duration metric: took 13.455706042s to wait for apiserver process to appear ...
	I1212 15:13:30.839894    3693 api_server.go:88] waiting for apiserver healthz status ...
	I1212 15:13:30.839909    3693 api_server.go:253] Checking apiserver healthz at https://192.169.0.13:8443/healthz ...
	I1212 15:13:30.843890    3693 api_server.go:279] https://192.169.0.13:8443/healthz returned 200:
	ok
	I1212 15:13:30.843928    3693 round_trippers.go:463] GET https://192.169.0.13:8443/version
	I1212 15:13:30.843933    3693 round_trippers.go:469] Request Headers:
	I1212 15:13:30.843939    3693 round_trippers.go:473]     Accept: application/json, */*
	I1212 15:13:30.843944    3693 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 15:13:30.844579    3693 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1212 15:13:30.844588    3693 round_trippers.go:577] Response Headers:
	I1212 15:13:30.844593    3693 round_trippers.go:580]     Content-Type: application/json
	I1212 15:13:30.844600    3693 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e29f512c-9329-4cd8-9305-82ad26aea980
	I1212 15:13:30.844606    3693 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ddf0869-55a0-47cb-b7ef-d98ab66cac66
	I1212 15:13:30.844611    3693 round_trippers.go:580]     Content-Length: 264
	I1212 15:13:30.844629    3693 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:13:30 GMT
	I1212 15:13:30.844639    3693 round_trippers.go:580]     Audit-Id: 7a3f1a1f-289a-4fa0-bc04-c649643f09e5
	I1212 15:13:30.844644    3693 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 15:13:30.844654    3693 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.4",
	  "gitCommit": "bae2c62678db2b5053817bc97181fcc2e8388103",
	  "gitTreeState": "clean",
	  "buildDate": "2023-11-15T16:48:54Z",
	  "goVersion": "go1.20.11",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I1212 15:13:30.844710    3693 api_server.go:141] control plane version: v1.28.4
	I1212 15:13:30.844719    3693 api_server.go:131] duration metric: took 4.821014ms to wait for apiserver health ...
	I1212 15:13:30.844728    3693 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 15:13:31.027759    3693 request.go:629] Waited for 182.972977ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods
	I1212 15:13:31.027813    3693 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods
	I1212 15:13:31.027824    3693 round_trippers.go:469] Request Headers:
	I1212 15:13:31.027872    3693 round_trippers.go:473]     Accept: application/json, */*
	I1212 15:13:31.027883    3693 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 15:13:31.031524    3693 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 15:13:31.031540    3693 round_trippers.go:577] Response Headers:
	I1212 15:13:31.031548    3693 round_trippers.go:580]     Audit-Id: e7066dca-39d9-4844-91bc-7c84b5f1d444
	I1212 15:13:31.031554    3693 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 15:13:31.031560    3693 round_trippers.go:580]     Content-Type: application/json
	I1212 15:13:31.031569    3693 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e29f512c-9329-4cd8-9305-82ad26aea980
	I1212 15:13:31.031576    3693 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ddf0869-55a0-47cb-b7ef-d98ab66cac66
	I1212 15:13:31.031584    3693 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:13:31 GMT
	I1212 15:13:31.032227    3693 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"419"},"items":[{"metadata":{"name":"coredns-5dd5756b68-gbw2q","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"09d20e99-6d1a-46d5-858f-71585ab9e532","resourceVersion":"414","creationTimestamp":"2023-12-12T23:13:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a4c42d0c-8d38-4a37-89f9-122c8abf6177","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:13:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a4c42d0c-8d38-4a37-89f9-122c8abf6177\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 54048 chars]
	I1212 15:13:31.033517    3693 system_pods.go:59] 8 kube-system pods found
	I1212 15:13:31.033531    3693 system_pods.go:61] "coredns-5dd5756b68-gbw2q" [09d20e99-6d1a-46d5-858f-71585ab9e532] Running
	I1212 15:13:31.033545    3693 system_pods.go:61] "etcd-multinode-449000" [193c5da5-9957-4b0c-ac1f-0883f287dc0d] Running
	I1212 15:13:31.033549    3693 system_pods.go:61] "kindnet-zkv5v" [92e2a49a-0055-4ae7-a167-fb51b4275183] Running
	I1212 15:13:31.033554    3693 system_pods.go:61] "kube-apiserver-multinode-449000" [d0340375-33dc-42b7-9b1d-6e66ff24d07b] Running
	I1212 15:13:31.033564    3693 system_pods.go:61] "kube-controller-manager-multinode-449000" [3cdec7d9-450b-47be-b93b-a5f3985415fa] Running
	I1212 15:13:31.033570    3693 system_pods.go:61] "kube-proxy-hxq22" [d330b0b4-7d3f-4386-a72d-cb235945c494] Running
	I1212 15:13:31.033576    3693 system_pods.go:61] "kube-scheduler-multinode-449000" [6eda8382-3903-4ab4-96fb-afc4948c144b] Running
	I1212 15:13:31.033579    3693 system_pods.go:61] "storage-provisioner" [11d647a8-b7f7-411a-b861-f3d109085770] Running
	I1212 15:13:31.033583    3693 system_pods.go:74] duration metric: took 188.852447ms to wait for pod list to return data ...
	I1212 15:13:31.033589    3693 default_sa.go:34] waiting for default service account to be created ...
	I1212 15:13:31.229242    3693 request.go:629] Waited for 195.608131ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/namespaces/default/serviceaccounts
	I1212 15:13:31.229375    3693 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/default/serviceaccounts
	I1212 15:13:31.229415    3693 round_trippers.go:469] Request Headers:
	I1212 15:13:31.229431    3693 round_trippers.go:473]     Accept: application/json, */*
	I1212 15:13:31.229442    3693 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 15:13:31.232342    3693 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 15:13:31.232358    3693 round_trippers.go:577] Response Headers:
	I1212 15:13:31.232365    3693 round_trippers.go:580]     Audit-Id: 11e01750-6986-4f9f-99bd-492df4225d0d
	I1212 15:13:31.232372    3693 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 15:13:31.232379    3693 round_trippers.go:580]     Content-Type: application/json
	I1212 15:13:31.232385    3693 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e29f512c-9329-4cd8-9305-82ad26aea980
	I1212 15:13:31.232392    3693 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ddf0869-55a0-47cb-b7ef-d98ab66cac66
	I1212 15:13:31.232411    3693 round_trippers.go:580]     Content-Length: 261
	I1212 15:13:31.232419    3693 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:13:31 GMT
	I1212 15:13:31.232435    3693 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"419"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"2237e2f6-7ac7-4dd4-a02d-49acbeab0757","resourceVersion":"309","creationTimestamp":"2023-12-12T23:13:16Z"}}]}
	I1212 15:13:31.232592    3693 default_sa.go:45] found service account: "default"
	I1212 15:13:31.232604    3693 default_sa.go:55] duration metric: took 199.011672ms for default service account to be created ...
	I1212 15:13:31.232611    3693 system_pods.go:116] waiting for k8s-apps to be running ...
	I1212 15:13:31.427830    3693 request.go:629] Waited for 195.126391ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods
	I1212 15:13:31.427889    3693 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods
	I1212 15:13:31.427903    3693 round_trippers.go:469] Request Headers:
	I1212 15:13:31.427918    3693 round_trippers.go:473]     Accept: application/json, */*
	I1212 15:13:31.427929    3693 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 15:13:31.431593    3693 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 15:13:31.431611    3693 round_trippers.go:577] Response Headers:
	I1212 15:13:31.431621    3693 round_trippers.go:580]     Audit-Id: 559f005e-403c-40dc-97ef-8f14b24c9c2c
	I1212 15:13:31.431632    3693 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 15:13:31.431641    3693 round_trippers.go:580]     Content-Type: application/json
	I1212 15:13:31.431664    3693 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e29f512c-9329-4cd8-9305-82ad26aea980
	I1212 15:13:31.431673    3693 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ddf0869-55a0-47cb-b7ef-d98ab66cac66
	I1212 15:13:31.431702    3693 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:13:31 GMT
	I1212 15:13:31.432193    3693 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"419"},"items":[{"metadata":{"name":"coredns-5dd5756b68-gbw2q","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"09d20e99-6d1a-46d5-858f-71585ab9e532","resourceVersion":"414","creationTimestamp":"2023-12-12T23:13:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a4c42d0c-8d38-4a37-89f9-122c8abf6177","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:13:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a4c42d0c-8d38-4a37-89f9-122c8abf6177\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 54048 chars]
	I1212 15:13:31.433453    3693 system_pods.go:86] 8 kube-system pods found
	I1212 15:13:31.433464    3693 system_pods.go:89] "coredns-5dd5756b68-gbw2q" [09d20e99-6d1a-46d5-858f-71585ab9e532] Running
	I1212 15:13:31.433469    3693 system_pods.go:89] "etcd-multinode-449000" [193c5da5-9957-4b0c-ac1f-0883f287dc0d] Running
	I1212 15:13:31.433472    3693 system_pods.go:89] "kindnet-zkv5v" [92e2a49a-0055-4ae7-a167-fb51b4275183] Running
	I1212 15:13:31.433476    3693 system_pods.go:89] "kube-apiserver-multinode-449000" [d0340375-33dc-42b7-9b1d-6e66ff24d07b] Running
	I1212 15:13:31.433481    3693 system_pods.go:89] "kube-controller-manager-multinode-449000" [3cdec7d9-450b-47be-b93b-a5f3985415fa] Running
	I1212 15:13:31.433484    3693 system_pods.go:89] "kube-proxy-hxq22" [d330b0b4-7d3f-4386-a72d-cb235945c494] Running
	I1212 15:13:31.433488    3693 system_pods.go:89] "kube-scheduler-multinode-449000" [6eda8382-3903-4ab4-96fb-afc4948c144b] Running
	I1212 15:13:31.433492    3693 system_pods.go:89] "storage-provisioner" [11d647a8-b7f7-411a-b861-f3d109085770] Running
	I1212 15:13:31.433496    3693 system_pods.go:126] duration metric: took 200.883058ms to wait for k8s-apps to be running ...
	I1212 15:13:31.433501    3693 system_svc.go:44] waiting for kubelet service to be running ....
	I1212 15:13:31.433552    3693 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 15:13:31.442180    3693 system_svc.go:56] duration metric: took 8.67505ms WaitForService to wait for kubelet.
	I1212 15:13:31.442191    3693 kubeadm.go:581] duration metric: took 14.058015329s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1212 15:13:31.442202    3693 node_conditions.go:102] verifying NodePressure condition ...
	I1212 15:13:31.627236    3693 request.go:629] Waited for 184.986807ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/nodes
	I1212 15:13:31.627308    3693 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes
	I1212 15:13:31.627317    3693 round_trippers.go:469] Request Headers:
	I1212 15:13:31.627330    3693 round_trippers.go:473]     Accept: application/json, */*
	I1212 15:13:31.627342    3693 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 15:13:31.629890    3693 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 15:13:31.629908    3693 round_trippers.go:577] Response Headers:
	I1212 15:13:31.629916    3693 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 15:13:31.629930    3693 round_trippers.go:580]     Content-Type: application/json
	I1212 15:13:31.629939    3693 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e29f512c-9329-4cd8-9305-82ad26aea980
	I1212 15:13:31.629945    3693 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ddf0869-55a0-47cb-b7ef-d98ab66cac66
	I1212 15:13:31.629952    3693 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:13:31 GMT
	I1212 15:13:31.629958    3693 round_trippers.go:580]     Audit-Id: 03ddbce0-a8ee-456d-b259-14c1f0a28598
	I1212 15:13:31.630091    3693 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"419"},"items":[{"metadata":{"name":"multinode-449000","uid":"ef12e061-4098-4e16-9f9d-81f611fa0b4f","resourceVersion":"395","creationTimestamp":"2023-12-12T23:13:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-449000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a4cfdd7fe6105c8f2fb237e157ac115c68ce5446","minikube.k8s.io/name":"multinode-449000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T15_13_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 4834 chars]
	I1212 15:13:31.630341    3693 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1212 15:13:31.630358    3693 node_conditions.go:123] node cpu capacity is 2
	I1212 15:13:31.630368    3693 node_conditions.go:105] duration metric: took 188.164533ms to run NodePressure ...
	I1212 15:13:31.630376    3693 start.go:228] waiting for startup goroutines ...
	I1212 15:13:31.630381    3693 start.go:233] waiting for cluster config update ...
	I1212 15:13:31.630390    3693 start.go:242] writing updated cluster config ...
	I1212 15:13:31.630706    3693 ssh_runner.go:195] Run: rm -f paused
	I1212 15:13:31.669375    3693 start.go:600] kubectl: 1.28.2, cluster: 1.28.4 (minor skew: 0)
	I1212 15:13:31.712455    3693 out.go:177] * Done! kubectl is now configured to use "multinode-449000" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Journal begins at Tue 2023-12-12 23:12:40 UTC, ends at Tue 2023-12-12 23:13:33 UTC. --
	Dec 12 23:13:17 multinode-449000 dockerd[1186]: time="2023-12-12T23:13:17.884959646Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 12 23:13:20 multinode-449000 cri-dockerd[1073]: time="2023-12-12T23:13:20Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/58468ea0d336573bd784a995cb21268ed8e49c862ddac39dd141cf1c560b5c34/resolv.conf as [nameserver 192.169.0.1]"
	Dec 12 23:13:23 multinode-449000 cri-dockerd[1073]: time="2023-12-12T23:13:23Z" level=info msg="Stop pulling image docker.io/kindest/kindnetd:v20230809-80a64d96: Status: Downloaded newer image for kindest/kindnetd:v20230809-80a64d96"
	Dec 12 23:13:23 multinode-449000 dockerd[1186]: time="2023-12-12T23:13:23.389442911Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 12 23:13:23 multinode-449000 dockerd[1186]: time="2023-12-12T23:13:23.389474798Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 12 23:13:23 multinode-449000 dockerd[1186]: time="2023-12-12T23:13:23.389492177Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 12 23:13:23 multinode-449000 dockerd[1186]: time="2023-12-12T23:13:23.389501001Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 12 23:13:28 multinode-449000 dockerd[1186]: time="2023-12-12T23:13:28.329290658Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 12 23:13:28 multinode-449000 dockerd[1186]: time="2023-12-12T23:13:28.330503918Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 12 23:13:28 multinode-449000 dockerd[1186]: time="2023-12-12T23:13:28.330682917Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 12 23:13:28 multinode-449000 dockerd[1186]: time="2023-12-12T23:13:28.330744194Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 12 23:13:28 multinode-449000 dockerd[1186]: time="2023-12-12T23:13:28.335844818Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 12 23:13:28 multinode-449000 dockerd[1186]: time="2023-12-12T23:13:28.335878515Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 12 23:13:28 multinode-449000 dockerd[1186]: time="2023-12-12T23:13:28.335890146Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 12 23:13:28 multinode-449000 dockerd[1186]: time="2023-12-12T23:13:28.335898250Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 12 23:13:28 multinode-449000 cri-dockerd[1073]: time="2023-12-12T23:13:28Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/9d7e822b848fcb73ba5773944be45fb9e5b045a727973b9d952e6492de5c76c8/resolv.conf as [nameserver 192.169.0.1]"
	Dec 12 23:13:28 multinode-449000 dockerd[1186]: time="2023-12-12T23:13:28.679027903Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 12 23:13:28 multinode-449000 dockerd[1186]: time="2023-12-12T23:13:28.679460903Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 12 23:13:28 multinode-449000 dockerd[1186]: time="2023-12-12T23:13:28.679531603Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 12 23:13:28 multinode-449000 dockerd[1186]: time="2023-12-12T23:13:28.679693891Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 12 23:13:28 multinode-449000 cri-dockerd[1073]: time="2023-12-12T23:13:28Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/29a2e0536a84ab01b79c79ff03f160a192a5cd43b0fb19c150f961468db844dd/resolv.conf as [nameserver 192.169.0.1]"
	Dec 12 23:13:28 multinode-449000 dockerd[1186]: time="2023-12-12T23:13:28.809976154Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 12 23:13:28 multinode-449000 dockerd[1186]: time="2023-12-12T23:13:28.810908729Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 12 23:13:28 multinode-449000 dockerd[1186]: time="2023-12-12T23:13:28.810947431Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 12 23:13:28 multinode-449000 dockerd[1186]: time="2023-12-12T23:13:28.810958631Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                      CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	95bc5fcd783f5       ead0a4a53df89                                                                              5 seconds ago       Running             coredns                   0                   29a2e0536a84a       coredns-5dd5756b68-gbw2q
	349aceac4c902       6e38f40d628db                                                                              5 seconds ago       Running             storage-provisioner       0                   9d7e822b848fc       storage-provisioner
	58bbe956bbc01       kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052   10 seconds ago      Running             kindnet-cni               0                   58468ea0d3365       kindnet-zkv5v
	bc270a1f54f31       83f6cc407eed8                                                                              16 seconds ago      Running             kube-proxy                0                   8189af807d9f1       kube-proxy-hxq22
	f52a90b7997c0       e3db313c6dbc0                                                                              34 seconds ago      Running             kube-scheduler            0                   4a6892d4d8341       kube-scheduler-multinode-449000
	cbf4f71244550       73deb9a3f7025                                                                              34 seconds ago      Running             etcd                      0                   de90edd09b0ec       etcd-multinode-449000
	d57c6b9df1bf2       7fe0e6f37db33                                                                              34 seconds ago      Running             kube-apiserver            0                   e22fa4a926f7b       kube-apiserver-multinode-449000
	a65940e255b01       d058aa5ab969c                                                                              34 seconds ago      Running             kube-controller-manager   0                   e84049d10a454       kube-controller-manager-multinode-449000
	
	* 
	* ==> coredns [95bc5fcd783f] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:35091 - 44462 "HINFO IN 6377447879366584547.718696205685487622. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.013431538s
	
	* 
	* ==> describe nodes <==
	* Name:               multinode-449000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-449000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a4cfdd7fe6105c8f2fb237e157ac115c68ce5446
	                    minikube.k8s.io/name=multinode-449000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_12_12T15_13_05_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 12 Dec 2023 23:13:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-449000
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 12 Dec 2023 23:13:25 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 12 Dec 2023 23:13:27 +0000   Tue, 12 Dec 2023 23:12:59 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 12 Dec 2023 23:13:27 +0000   Tue, 12 Dec 2023 23:12:59 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 12 Dec 2023 23:13:27 +0000   Tue, 12 Dec 2023 23:12:59 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 12 Dec 2023 23:13:27 +0000   Tue, 12 Dec 2023 23:13:27 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.13
	  Hostname:    multinode-449000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2166052Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2166052Ki
	  pods:               110
	System Info:
	  Machine ID:                 bb37e330f5b9443c8e5e898060e544a9
	  System UUID:                9fde11ee-0000-0000-8111-f01898ef957c
	  Boot ID:                    d3ea05a9-ae4a-4962-87d7-3f212ad5cd37
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://24.0.7
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-gbw2q                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16s
	  kube-system                 etcd-multinode-449000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         29s
	  kube-system                 kindnet-zkv5v                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      16s
	  kube-system                 kube-apiserver-multinode-449000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-controller-manager-multinode-449000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-proxy-hxq22                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         16s
	  kube-system                 kube-scheduler-multinode-449000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         15s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 15s   kube-proxy       
	  Normal  Starting                 29s   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  29s   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  29s   kubelet          Node multinode-449000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    29s   kubelet          Node multinode-449000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     29s   kubelet          Node multinode-449000 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           17s   node-controller  Node multinode-449000 event: Registered Node multinode-449000 in Controller
	  Normal  NodeReady                6s    kubelet          Node multinode-449000 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [  +0.000002] ACPI Warning: Could not enable fixed event - RealTimeClock (4) (20200925/evxface-618)
	[  +0.007017] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.288646] systemd-fstab-generator[125]: Ignoring "noauto" for root device
	[  +0.038754] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +1.897459] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +1.834163] systemd-fstab-generator[514]: Ignoring "noauto" for root device
	[  +0.093786] systemd-fstab-generator[525]: Ignoring "noauto" for root device
	[  +0.664259] systemd-fstab-generator[737]: Ignoring "noauto" for root device
	[  +0.268983] systemd-fstab-generator[775]: Ignoring "noauto" for root device
	[  +0.090021] systemd-fstab-generator[786]: Ignoring "noauto" for root device
	[  +0.100928] systemd-fstab-generator[799]: Ignoring "noauto" for root device
	[  +1.275114] kauditd_printk_skb: 61 callbacks suppressed
	[  +0.076537] systemd-fstab-generator[963]: Ignoring "noauto" for root device
	[  +0.094137] systemd-fstab-generator[998]: Ignoring "noauto" for root device
	[  +0.093978] systemd-fstab-generator[1009]: Ignoring "noauto" for root device
	[  +0.082710] systemd-fstab-generator[1020]: Ignoring "noauto" for root device
	[  +0.110276] systemd-fstab-generator[1041]: Ignoring "noauto" for root device
	[  +5.237461] systemd-fstab-generator[1171]: Ignoring "noauto" for root device
	[  +4.794573] systemd-fstab-generator[1551]: Ignoring "noauto" for root device
	[Dec12 23:13] systemd-fstab-generator[2394]: Ignoring "noauto" for root device
	[ +13.587099] kauditd_printk_skb: 39 callbacks suppressed
	[  +5.747591] kauditd_printk_skb: 14 callbacks suppressed
	
	* 
	* ==> etcd [cbf4f7124455] <==
	* {"level":"info","ts":"2023-12-12T23:12:59.825451Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-12-12T23:12:59.825562Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.169.0.13:2380"}
	{"level":"info","ts":"2023-12-12T23:12:59.825609Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.169.0.13:2380"}
	{"level":"info","ts":"2023-12-12T23:12:59.828983Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"e0290fa3161c5471","initial-advertise-peer-urls":["https://192.169.0.13:2380"],"listen-peer-urls":["https://192.169.0.13:2380"],"advertise-client-urls":["https://192.169.0.13:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.169.0.13:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-12-12T23:12:59.82904Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-12-12T23:13:00.593978Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e0290fa3161c5471 is starting a new election at term 1"}
	{"level":"info","ts":"2023-12-12T23:13:00.594022Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e0290fa3161c5471 became pre-candidate at term 1"}
	{"level":"info","ts":"2023-12-12T23:13:00.594033Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e0290fa3161c5471 received MsgPreVoteResp from e0290fa3161c5471 at term 1"}
	{"level":"info","ts":"2023-12-12T23:13:00.594042Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e0290fa3161c5471 became candidate at term 2"}
	{"level":"info","ts":"2023-12-12T23:13:00.594046Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e0290fa3161c5471 received MsgVoteResp from e0290fa3161c5471 at term 2"}
	{"level":"info","ts":"2023-12-12T23:13:00.594053Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e0290fa3161c5471 became leader at term 2"}
	{"level":"info","ts":"2023-12-12T23:13:00.594061Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: e0290fa3161c5471 elected leader e0290fa3161c5471 at term 2"}
	{"level":"info","ts":"2023-12-12T23:13:00.594813Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"e0290fa3161c5471","local-member-attributes":"{Name:multinode-449000 ClientURLs:[https://192.169.0.13:2379]}","request-path":"/0/members/e0290fa3161c5471/attributes","cluster-id":"87b46e718846f146","publish-timeout":"7s"}
	{"level":"info","ts":"2023-12-12T23:13:00.596408Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-12T23:13:00.597065Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-12-12T23:13:00.597176Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-12T23:13:00.597273Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-12T23:13:00.601498Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.169.0.13:2379"}
	{"level":"info","ts":"2023-12-12T23:13:00.601865Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-12-12T23:13:00.601875Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-12-12T23:13:00.623742Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"87b46e718846f146","local-member-id":"e0290fa3161c5471","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-12T23:13:00.623931Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-12T23:13:00.624046Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-12T23:13:17.553066Z","caller":"traceutil/trace.go:171","msg":"trace[473893054] transaction","detail":"{read_only:false; response_revision:344; number_of_response:1; }","duration":"111.978423ms","start":"2023-12-12T23:13:17.440922Z","end":"2023-12-12T23:13:17.5529Z","steps":["trace[473893054] 'process raft request'  (duration: 37.599328ms)","trace[473893054] 'compare'  (duration: 74.277806ms)"],"step_count":2}
	{"level":"info","ts":"2023-12-12T23:13:17.553742Z","caller":"traceutil/trace.go:171","msg":"trace[91434881] transaction","detail":"{read_only:false; response_revision:345; number_of_response:1; }","duration":"111.636257ms","start":"2023-12-12T23:13:17.442093Z","end":"2023-12-12T23:13:17.553729Z","steps":["trace[91434881] 'process raft request'  (duration: 111.32696ms)"],"step_count":1}
	
	* 
	* ==> kernel <==
	*  23:13:33 up 1 min,  0 users,  load average: 1.41, 0.40, 0.14
	Linux multinode-449000 5.10.57 #1 SMP Fri Dec 8 05:36:01 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kindnet [58bbe956bbc0] <==
	* I1212 23:13:23.520861       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I1212 23:13:23.520913       1 main.go:107] hostIP = 192.169.0.13
	podIP = 192.169.0.13
	I1212 23:13:23.521005       1 main.go:116] setting mtu 1500 for CNI 
	I1212 23:13:23.521018       1 main.go:146] kindnetd IP family: "ipv4"
	I1212 23:13:23.521036       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I1212 23:13:23.724964       1 main.go:223] Handling node with IPs: map[192.169.0.13:{}]
	I1212 23:13:23.725050       1 main.go:227] handling current node
	I1212 23:13:33.727761       1 main.go:223] Handling node with IPs: map[192.169.0.13:{}]
	I1212 23:13:33.727777       1 main.go:227] handling current node
	
	* 
	* ==> kube-apiserver [d57c6b9df1bf] <==
	* I1212 23:13:01.638626       1 shared_informer.go:318] Caches are synced for configmaps
	I1212 23:13:01.638818       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1212 23:13:01.638946       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1212 23:13:01.640055       1 controller.go:624] quota admission added evaluator for: namespaces
	I1212 23:13:01.644736       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1212 23:13:01.645888       1 aggregator.go:166] initial CRD sync complete...
	I1212 23:13:01.646061       1 autoregister_controller.go:141] Starting autoregister controller
	I1212 23:13:01.646127       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1212 23:13:01.646173       1 cache.go:39] Caches are synced for autoregister controller
	I1212 23:13:01.661639       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1212 23:13:02.539735       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1212 23:13:02.542612       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1212 23:13:02.542620       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1212 23:13:02.886599       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1212 23:13:02.913735       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1212 23:13:02.950428       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1212 23:13:02.955449       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.169.0.13]
	I1212 23:13:02.956465       1 controller.go:624] quota admission added evaluator for: endpoints
	I1212 23:13:02.959973       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1212 23:13:03.579732       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1212 23:13:04.639287       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1212 23:13:04.680224       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1212 23:13:04.687003       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1212 23:13:17.131606       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	I1212 23:13:17.330848       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	
	* 
	* ==> kube-controller-manager [a65940e255b0] <==
	* I1212 23:13:16.536339       1 shared_informer.go:318] Caches are synced for resource quota
	I1212 23:13:16.580270       1 shared_informer.go:318] Caches are synced for deployment
	I1212 23:13:16.583892       1 shared_informer.go:318] Caches are synced for disruption
	I1212 23:13:16.584993       1 shared_informer.go:318] Caches are synced for resource quota
	I1212 23:13:16.625708       1 shared_informer.go:318] Caches are synced for ReplicaSet
	I1212 23:13:16.965091       1 shared_informer.go:318] Caches are synced for garbage collector
	I1212 23:13:16.991675       1 shared_informer.go:318] Caches are synced for garbage collector
	I1212 23:13:16.991709       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1212 23:13:17.139253       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-zkv5v"
	I1212 23:13:17.141698       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-hxq22"
	I1212 23:13:17.333986       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5dd5756b68 to 2"
	I1212 23:13:17.557309       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-pk47r"
	I1212 23:13:17.557360       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I1212 23:13:17.569686       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-gbw2q"
	I1212 23:13:17.589493       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="255.869106ms"
	I1212 23:13:17.604752       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-pk47r"
	I1212 23:13:17.611415       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="21.817254ms"
	I1212 23:13:17.624419       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="12.85131ms"
	I1212 23:13:17.624716       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="82.156µs"
	I1212 23:13:27.969254       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="53.807µs"
	I1212 23:13:27.989675       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="72.171µs"
	I1212 23:13:29.737447       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="61.07µs"
	I1212 23:13:29.778766       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="15.904788ms"
	I1212 23:13:29.778912       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="32.956µs"
	I1212 23:13:31.438926       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	
	* 
	* ==> kube-proxy [bc270a1f54f3] <==
	* I1212 23:13:18.012684       1 server_others.go:69] "Using iptables proxy"
	I1212 23:13:18.037892       1 node.go:141] Successfully retrieved node IP: 192.169.0.13
	I1212 23:13:18.072494       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1212 23:13:18.072509       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1212 23:13:18.074981       1 server_others.go:152] "Using iptables Proxier"
	I1212 23:13:18.075043       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1212 23:13:18.075202       1 server.go:846] "Version info" version="v1.28.4"
	I1212 23:13:18.075209       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1212 23:13:18.076295       1 config.go:188] "Starting service config controller"
	I1212 23:13:18.076303       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1212 23:13:18.076315       1 config.go:97] "Starting endpoint slice config controller"
	I1212 23:13:18.076318       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1212 23:13:18.076333       1 config.go:315] "Starting node config controller"
	I1212 23:13:18.076335       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1212 23:13:18.177081       1 shared_informer.go:318] Caches are synced for node config
	I1212 23:13:18.177098       1 shared_informer.go:318] Caches are synced for service config
	I1212 23:13:18.177117       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-scheduler [f52a90b7997c] <==
	* W1212 23:13:01.622493       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1212 23:13:01.622557       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1212 23:13:01.627703       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1212 23:13:01.627759       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1212 23:13:01.627882       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1212 23:13:01.627969       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1212 23:13:01.628097       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1212 23:13:01.628146       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1212 23:13:01.628239       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1212 23:13:01.628286       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1212 23:13:01.628384       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1212 23:13:01.628478       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1212 23:13:02.458336       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1212 23:13:02.458362       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1212 23:13:02.467319       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1212 23:13:02.467352       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1212 23:13:02.496299       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1212 23:13:02.496382       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1212 23:13:02.572595       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1212 23:13:02.572751       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1212 23:13:02.707713       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1212 23:13:02.707895       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1212 23:13:02.722617       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1212 23:13:02.722657       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I1212 23:13:04.511351       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Tue 2023-12-12 23:12:40 UTC, ends at Tue 2023-12-12 23:13:34 UTC. --
	Dec 12 23:13:16 multinode-449000 kubelet[2408]: I1212 23:13:16.487801    2408 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Dec 12 23:13:17 multinode-449000 kubelet[2408]: I1212 23:13:17.146660    2408 topology_manager.go:215] "Topology Admit Handler" podUID="92e2a49a-0055-4ae7-a167-fb51b4275183" podNamespace="kube-system" podName="kindnet-zkv5v"
	Dec 12 23:13:17 multinode-449000 kubelet[2408]: I1212 23:13:17.149643    2408 topology_manager.go:215] "Topology Admit Handler" podUID="d330b0b4-7d3f-4386-a72d-cb235945c494" podNamespace="kube-system" podName="kube-proxy-hxq22"
	Dec 12 23:13:17 multinode-449000 kubelet[2408]: I1212 23:13:17.237278    2408 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6m8b\" (UniqueName: \"kubernetes.io/projected/92e2a49a-0055-4ae7-a167-fb51b4275183-kube-api-access-t6m8b\") pod \"kindnet-zkv5v\" (UID: \"92e2a49a-0055-4ae7-a167-fb51b4275183\") " pod="kube-system/kindnet-zkv5v"
	Dec 12 23:13:17 multinode-449000 kubelet[2408]: I1212 23:13:17.237311    2408 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/92e2a49a-0055-4ae7-a167-fb51b4275183-cni-cfg\") pod \"kindnet-zkv5v\" (UID: \"92e2a49a-0055-4ae7-a167-fb51b4275183\") " pod="kube-system/kindnet-zkv5v"
	Dec 12 23:13:17 multinode-449000 kubelet[2408]: I1212 23:13:17.237329    2408 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/92e2a49a-0055-4ae7-a167-fb51b4275183-xtables-lock\") pod \"kindnet-zkv5v\" (UID: \"92e2a49a-0055-4ae7-a167-fb51b4275183\") " pod="kube-system/kindnet-zkv5v"
	Dec 12 23:13:17 multinode-449000 kubelet[2408]: I1212 23:13:17.237346    2408 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d330b0b4-7d3f-4386-a72d-cb235945c494-lib-modules\") pod \"kube-proxy-hxq22\" (UID: \"d330b0b4-7d3f-4386-a72d-cb235945c494\") " pod="kube-system/kube-proxy-hxq22"
	Dec 12 23:13:17 multinode-449000 kubelet[2408]: I1212 23:13:17.237364    2408 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d330b0b4-7d3f-4386-a72d-cb235945c494-xtables-lock\") pod \"kube-proxy-hxq22\" (UID: \"d330b0b4-7d3f-4386-a72d-cb235945c494\") " pod="kube-system/kube-proxy-hxq22"
	Dec 12 23:13:17 multinode-449000 kubelet[2408]: I1212 23:13:17.237380    2408 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/92e2a49a-0055-4ae7-a167-fb51b4275183-lib-modules\") pod \"kindnet-zkv5v\" (UID: \"92e2a49a-0055-4ae7-a167-fb51b4275183\") " pod="kube-system/kindnet-zkv5v"
	Dec 12 23:13:17 multinode-449000 kubelet[2408]: I1212 23:13:17.237392    2408 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d330b0b4-7d3f-4386-a72d-cb235945c494-kube-proxy\") pod \"kube-proxy-hxq22\" (UID: \"d330b0b4-7d3f-4386-a72d-cb235945c494\") " pod="kube-system/kube-proxy-hxq22"
	Dec 12 23:13:17 multinode-449000 kubelet[2408]: I1212 23:13:17.237409    2408 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phfqr\" (UniqueName: \"kubernetes.io/projected/d330b0b4-7d3f-4386-a72d-cb235945c494-kube-api-access-phfqr\") pod \"kube-proxy-hxq22\" (UID: \"d330b0b4-7d3f-4386-a72d-cb235945c494\") " pod="kube-system/kube-proxy-hxq22"
	Dec 12 23:13:20 multinode-449000 kubelet[2408]: I1212 23:13:20.242777    2408 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="58468ea0d336573bd784a995cb21268ed8e49c862ddac39dd141cf1c560b5c34"
	Dec 12 23:13:21 multinode-449000 kubelet[2408]: I1212 23:13:21.258825    2408 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-hxq22" podStartSLOduration=4.2588004250000004 podCreationTimestamp="2023-12-12 23:13:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-12-12 23:13:21.257986401 +0000 UTC m=+16.642623565" watchObservedRunningTime="2023-12-12 23:13:21.258800425 +0000 UTC m=+16.643437591"
	Dec 12 23:13:24 multinode-449000 kubelet[2408]: I1212 23:13:24.809084    2408 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-zkv5v" podStartSLOduration=4.720143813 podCreationTimestamp="2023-12-12 23:13:17 +0000 UTC" firstStartedPulling="2023-12-12 23:13:20.245242791 +0000 UTC m=+15.629879948" lastFinishedPulling="2023-12-12 23:13:23.334156483 +0000 UTC m=+18.718793640" observedRunningTime="2023-12-12 23:13:24.290622141 +0000 UTC m=+19.675259301" watchObservedRunningTime="2023-12-12 23:13:24.809057505 +0000 UTC m=+20.193694665"
	Dec 12 23:13:27 multinode-449000 kubelet[2408]: I1212 23:13:27.953101    2408 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Dec 12 23:13:27 multinode-449000 kubelet[2408]: I1212 23:13:27.969740    2408 topology_manager.go:215] "Topology Admit Handler" podUID="09d20e99-6d1a-46d5-858f-71585ab9e532" podNamespace="kube-system" podName="coredns-5dd5756b68-gbw2q"
	Dec 12 23:13:27 multinode-449000 kubelet[2408]: I1212 23:13:27.969824    2408 topology_manager.go:215] "Topology Admit Handler" podUID="11d647a8-b7f7-411a-b861-f3d109085770" podNamespace="kube-system" podName="storage-provisioner"
	Dec 12 23:13:28 multinode-449000 kubelet[2408]: I1212 23:13:28.011232    2408 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5s2gm\" (UniqueName: \"kubernetes.io/projected/09d20e99-6d1a-46d5-858f-71585ab9e532-kube-api-access-5s2gm\") pod \"coredns-5dd5756b68-gbw2q\" (UID: \"09d20e99-6d1a-46d5-858f-71585ab9e532\") " pod="kube-system/coredns-5dd5756b68-gbw2q"
	Dec 12 23:13:28 multinode-449000 kubelet[2408]: I1212 23:13:28.011507    2408 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gbjkn\" (UniqueName: \"kubernetes.io/projected/11d647a8-b7f7-411a-b861-f3d109085770-kube-api-access-gbjkn\") pod \"storage-provisioner\" (UID: \"11d647a8-b7f7-411a-b861-f3d109085770\") " pod="kube-system/storage-provisioner"
	Dec 12 23:13:28 multinode-449000 kubelet[2408]: I1212 23:13:28.011784    2408 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/09d20e99-6d1a-46d5-858f-71585ab9e532-config-volume\") pod \"coredns-5dd5756b68-gbw2q\" (UID: \"09d20e99-6d1a-46d5-858f-71585ab9e532\") " pod="kube-system/coredns-5dd5756b68-gbw2q"
	Dec 12 23:13:28 multinode-449000 kubelet[2408]: I1212 23:13:28.011939    2408 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/11d647a8-b7f7-411a-b861-f3d109085770-tmp\") pod \"storage-provisioner\" (UID: \"11d647a8-b7f7-411a-b861-f3d109085770\") " pod="kube-system/storage-provisioner"
	Dec 12 23:13:28 multinode-449000 kubelet[2408]: I1212 23:13:28.637957    2408 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9d7e822b848fcb73ba5773944be45fb9e5b045a727973b9d952e6492de5c76c8"
	Dec 12 23:13:28 multinode-449000 kubelet[2408]: I1212 23:13:28.701355    2408 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="29a2e0536a84ab01b79c79ff03f160a192a5cd43b0fb19c150f961468db844dd"
	Dec 12 23:13:29 multinode-449000 kubelet[2408]: I1212 23:13:29.723679    2408 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=11.723652116 podCreationTimestamp="2023-12-12 23:13:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-12-12 23:13:29.723649311 +0000 UTC m=+25.108286476" watchObservedRunningTime="2023-12-12 23:13:29.723652116 +0000 UTC m=+25.108289280"
	Dec 12 23:13:29 multinode-449000 kubelet[2408]: I1212 23:13:29.761864    2408 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-gbw2q" podStartSLOduration=12.761840165 podCreationTimestamp="2023-12-12 23:13:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-12-12 23:13:29.746782023 +0000 UTC m=+25.131419208" watchObservedRunningTime="2023-12-12 23:13:29.761840165 +0000 UTC m=+25.146477330"
	
	* 
	* ==> storage-provisioner [349aceac4c90] <==
	* I1212 23:13:28.776618       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1212 23:13:28.782292       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1212 23:13:28.782347       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1212 23:13:28.787077       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1212 23:13:28.787616       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_multinode-449000_bf43f63a-cdfb-4d50-832d-d0ae8d0a0d1a!
	I1212 23:13:28.789693       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"3abdb08b-1824-4529-8878-e42e5ba065dd", APIVersion:"v1", ResourceVersion:"406", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' multinode-449000_bf43f63a-cdfb-4d50-832d-d0ae8d0a0d1a became leader
	I1212 23:13:28.888957       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_multinode-449000_bf43f63a-cdfb-4d50-832d-d0ae8d0a0d1a!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p multinode-449000 -n multinode-449000
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-449000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/DeleteNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/DeleteNode (3.01s)
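The post-mortem step above (helpers_test.go:261) lists pods in any namespace whose phase is not Running. A minimal Go sketch of the same check, assuming kubectl is on PATH and uses the multinode-449000 context from this report; it is not part of the test run:

-- example (not from the test run) --
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Same query as the post-mortem step: names of pods whose phase is not Running.
		out, err := exec.Command("kubectl", "--context", "multinode-449000",
			"get", "po", "-A",
			"-o", "jsonpath={.items[*].metadata.name}",
			"--field-selector", "status.phase!=Running").Output()
		if err != nil {
			fmt.Println("kubectl failed:", err)
			return
		}
		if names := strings.Fields(string(out)); len(names) > 0 {
			fmt.Println("non-Running pods:", names)
		} else {
			fmt.Println("all pods are Running")
		}
	}
-- /example --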

                                                
                                    
TestMultiNode/serial/StopMultiNode (8.45s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:342: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-449000 stop
multinode_test.go:342: (dbg) Done: out/minikube-darwin-amd64 -p multinode-449000 stop: (8.242823199s)
multinode_test.go:348: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-449000 status
multinode_test.go:348: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-449000 status: exit status 7 (68.759525ms)

                                                
                                                
-- stdout --
	multinode-449000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:355: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-449000 status --alsologtostderr
multinode_test.go:355: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-449000 status --alsologtostderr: exit status 7 (67.381676ms)

                                                
                                                
-- stdout --
	multinode-449000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1212 15:13:43.194994    3765 out.go:296] Setting OutFile to fd 1 ...
	I1212 15:13:43.195289    3765 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 15:13:43.195294    3765 out.go:309] Setting ErrFile to fd 2...
	I1212 15:13:43.195299    3765 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 15:13:43.195474    3765 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17777-1259/.minikube/bin
	I1212 15:13:43.195652    3765 out.go:303] Setting JSON to false
	I1212 15:13:43.195674    3765 mustload.go:65] Loading cluster: multinode-449000
	I1212 15:13:43.195722    3765 notify.go:220] Checking for updates...
	I1212 15:13:43.195968    3765 config.go:182] Loaded profile config "multinode-449000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1212 15:13:43.195981    3765 status.go:255] checking status of multinode-449000 ...
	I1212 15:13:43.196381    3765 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1212 15:13:43.196431    3765 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1212 15:13:43.204764    3765 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51352
	I1212 15:13:43.205091    3765 main.go:141] libmachine: () Calling .GetVersion
	I1212 15:13:43.205537    3765 main.go:141] libmachine: Using API Version  1
	I1212 15:13:43.205550    3765 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 15:13:43.205747    3765 main.go:141] libmachine: () Calling .GetMachineName
	I1212 15:13:43.205859    3765 main.go:141] libmachine: (multinode-449000) Calling .GetState
	I1212 15:13:43.205949    3765 main.go:141] libmachine: (multinode-449000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1212 15:13:43.206014    3765 main.go:141] libmachine: (multinode-449000) DBG | hyperkit pid from json: 3705
	I1212 15:13:43.206924    3765 main.go:141] libmachine: (multinode-449000) DBG | hyperkit pid 3705 missing from process table
	I1212 15:13:43.206976    3765 status.go:330] multinode-449000 host status = "Stopped" (err=<nil>)
	I1212 15:13:43.206984    3765 status.go:343] host is not running, skipping remaining checks
	I1212 15:13:43.206990    3765 status.go:257] multinode-449000 status: &{Name:multinode-449000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
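In the trace above, status reports the host as Stopped because the recorded hyperkit pid (3705) is missing from the process table. A minimal Go sketch of that liveness probe, assuming the pid is already known (the driver normally reads it from the machine's JSON config under .minikube/machines); not part of the test run:

-- example (not from the test run) --
	package main

	import (
		"fmt"
		"syscall"
	)

	// pidAlive reports whether a process with the given pid exists.
	// Signal 0 sends nothing and only performs error checking; EPERM still
	// means the process exists but belongs to another user.
	func pidAlive(pid int) bool {
		err := syscall.Kill(pid, 0)
		return err == nil || err == syscall.EPERM
	}

	func main() {
		pid := 3705 // pid recorded in the trace above
		if pidAlive(pid) {
			fmt.Printf("hyperkit pid %d is running\n", pid)
		} else {
			fmt.Printf("hyperkit pid %d missing from process table -> host Stopped\n", pid)
		}
	}
-- /example --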
multinode_test.go:361: incorrect number of stopped hosts: args "out/minikube-darwin-amd64 -p multinode-449000 status --alsologtostderr": multinode-449000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
multinode_test.go:365: incorrect number of stopped kubelets: args "out/minikube-darwin-amd64 -p multinode-449000 status --alsologtostderr": multinode-449000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-449000 -n multinode-449000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-449000 -n multinode-449000: exit status 7 (68.029753ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-449000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopMultiNode (8.45s)
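The failures at multinode_test.go:361 and :365 above come down to counting node blocks in the status output: only one "host: Stopped" entry is printed where the test expects one per node (two). A minimal Go sketch of that count, assuming the binary path and profile name shown in this report; not part of the test run:

-- example (not from the test run) --
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		profile := "multinode-449000"
		// status exits non-zero (7) when hosts are stopped, so the error is ignored here.
		out, _ := exec.Command("out/minikube-darwin-amd64", "-p", profile, "status").CombinedOutput()
		text := string(out)

		wantNodes := 2
		stoppedHosts := strings.Count(text, "host: Stopped")
		stoppedKubelets := strings.Count(text, "kubelet: Stopped")
		fmt.Printf("stopped hosts: %d/%d, stopped kubelets: %d/%d\n",
			stoppedHosts, wantNodes, stoppedKubelets, wantNodes)
		if stoppedHosts != wantNodes || stoppedKubelets != wantNodes {
			fmt.Println("a node is missing from the status output")
		}
	}
-- /example --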

                                                
                                    
TestMultiNode/serial/RestartMultiNode (50.18s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:382: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-449000 --wait=true -v=8 --alsologtostderr --driver=hyperkit 
E1212 15:14:04.202180    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/ingress-addon-legacy-267000/client.crt: no such file or directory
multinode_test.go:382: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-449000 --wait=true -v=8 --alsologtostderr --driver=hyperkit : (46.512165571s)
multinode_test.go:388: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-449000 status --alsologtostderr
multinode_test.go:394: status says both hosts are not running: args "out/minikube-darwin-amd64 -p multinode-449000 status --alsologtostderr": 
-- stdout --
	multinode-449000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1212 15:14:29.843638    3806 out.go:296] Setting OutFile to fd 1 ...
	I1212 15:14:29.843931    3806 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 15:14:29.843937    3806 out.go:309] Setting ErrFile to fd 2...
	I1212 15:14:29.843941    3806 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 15:14:29.844136    3806 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17777-1259/.minikube/bin
	I1212 15:14:29.844319    3806 out.go:303] Setting JSON to false
	I1212 15:14:29.844343    3806 mustload.go:65] Loading cluster: multinode-449000
	I1212 15:14:29.844378    3806 notify.go:220] Checking for updates...
	I1212 15:14:29.844623    3806 config.go:182] Loaded profile config "multinode-449000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1212 15:14:29.844636    3806 status.go:255] checking status of multinode-449000 ...
	I1212 15:14:29.845046    3806 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1212 15:14:29.845100    3806 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1212 15:14:29.853582    3806 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51401
	I1212 15:14:29.853967    3806 main.go:141] libmachine: () Calling .GetVersion
	I1212 15:14:29.854379    3806 main.go:141] libmachine: Using API Version  1
	I1212 15:14:29.854389    3806 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 15:14:29.854637    3806 main.go:141] libmachine: () Calling .GetMachineName
	I1212 15:14:29.854748    3806 main.go:141] libmachine: (multinode-449000) Calling .GetState
	I1212 15:14:29.854835    3806 main.go:141] libmachine: (multinode-449000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1212 15:14:29.854894    3806 main.go:141] libmachine: (multinode-449000) DBG | hyperkit pid from json: 3784
	I1212 15:14:29.855850    3806 status.go:330] multinode-449000 host status = "Running" (err=<nil>)
	I1212 15:14:29.855866    3806 host.go:66] Checking if "multinode-449000" exists ...
	I1212 15:14:29.856093    3806 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1212 15:14:29.856113    3806 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1212 15:14:29.863878    3806 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51403
	I1212 15:14:29.864197    3806 main.go:141] libmachine: () Calling .GetVersion
	I1212 15:14:29.864543    3806 main.go:141] libmachine: Using API Version  1
	I1212 15:14:29.864557    3806 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 15:14:29.864777    3806 main.go:141] libmachine: () Calling .GetMachineName
	I1212 15:14:29.864882    3806 main.go:141] libmachine: (multinode-449000) Calling .GetIP
	I1212 15:14:29.864962    3806 host.go:66] Checking if "multinode-449000" exists ...
	I1212 15:14:29.865221    3806 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1212 15:14:29.865244    3806 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1212 15:14:29.875418    3806 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51405
	I1212 15:14:29.875771    3806 main.go:141] libmachine: () Calling .GetVersion
	I1212 15:14:29.876105    3806 main.go:141] libmachine: Using API Version  1
	I1212 15:14:29.876132    3806 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 15:14:29.876355    3806 main.go:141] libmachine: () Calling .GetMachineName
	I1212 15:14:29.876480    3806 main.go:141] libmachine: (multinode-449000) Calling .DriverName
	I1212 15:14:29.876617    3806 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 15:14:29.876641    3806 main.go:141] libmachine: (multinode-449000) Calling .GetSSHHostname
	I1212 15:14:29.876736    3806 main.go:141] libmachine: (multinode-449000) Calling .GetSSHPort
	I1212 15:14:29.876811    3806 main.go:141] libmachine: (multinode-449000) Calling .GetSSHKeyPath
	I1212 15:14:29.876891    3806 main.go:141] libmachine: (multinode-449000) Calling .GetSSHUsername
	I1212 15:14:29.876985    3806 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17777-1259/.minikube/machines/multinode-449000/id_rsa Username:docker}
	I1212 15:14:29.920403    3806 ssh_runner.go:195] Run: systemctl --version
	I1212 15:14:29.924000    3806 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 15:14:29.933320    3806 kubeconfig.go:92] found "multinode-449000" server: "https://192.169.0.13:8443"
	I1212 15:14:29.933341    3806 api_server.go:166] Checking apiserver status ...
	I1212 15:14:29.933377    3806 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 15:14:29.942396    3806 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1624/cgroup
	I1212 15:14:29.949633    3806 api_server.go:182] apiserver freezer: "9:freezer:/kubepods/burstable/pod713a71f0e8f1e4f4a127fa5f9adf437f/7e9188da4ac199b6a80c316f724744376df4e8620954d6151e46da44fed5ade1"
	I1212 15:14:29.949684    3806 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod713a71f0e8f1e4f4a127fa5f9adf437f/7e9188da4ac199b6a80c316f724744376df4e8620954d6151e46da44fed5ade1/freezer.state
	I1212 15:14:29.956034    3806 api_server.go:204] freezer state: "THAWED"
	I1212 15:14:29.956053    3806 api_server.go:253] Checking apiserver healthz at https://192.169.0.13:8443/healthz ...
	I1212 15:14:29.959897    3806 api_server.go:279] https://192.169.0.13:8443/healthz returned 200:
	ok
	I1212 15:14:29.959908    3806 status.go:421] multinode-449000 apiserver status = Running (err=<nil>)
	I1212 15:14:29.959918    3806 status.go:257] multinode-449000 status: &{Name:multinode-449000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
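The stderr trace above walks through the apiserver check step by step: pgrep for the kube-apiserver process, read its freezer cgroup to confirm the state is THAWED, then probe /healthz. A rough Go sketch of the same steps from the host, assuming the minikube binary and profile from this report plus a kubectl context of the same name; the node-side command strings are copied from the trace, and the whole thing is illustrative rather than how status.go is implemented:

-- example (not from the test run) --
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// runOnNode executes a shell command inside the node VM via `minikube ssh`.
	func runOnNode(profile, cmd string) string {
		out, _ := exec.Command("out/minikube-darwin-amd64", "-p", profile, "ssh", "--", cmd).CombinedOutput()
		return strings.TrimSpace(string(out))
	}

	func main() {
		profile := "multinode-449000"

		// 1. Find the kube-apiserver pid, as the trace does with pgrep.
		pid := runOnNode(profile, "sudo pgrep -xnf 'kube-apiserver.*minikube.*'")
		fmt.Println("apiserver pid:", pid)

		// 2. Its freezer cgroup line; the pod path in it leads to freezer.state (THAWED when not paused).
		fmt.Println(runOnNode(profile, "sudo egrep '^[0-9]+:freezer:' /proc/"+pid+"/cgroup"))

		// 3. Ask the apiserver itself, using the kubeconfig credentials.
		healthz, _ := exec.Command("kubectl", "--context", profile, "get", "--raw", "/healthz").Output()
		fmt.Println("healthz:", strings.TrimSpace(string(healthz)))
	}
-- /example --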
multinode_test.go:398: status says both kubelets are not running: args "out/minikube-darwin-amd64 -p multinode-449000 status --alsologtostderr": 
-- stdout --
	multinode-449000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1212 15:14:29.843638    3806 out.go:296] Setting OutFile to fd 1 ...
	I1212 15:14:29.843931    3806 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 15:14:29.843937    3806 out.go:309] Setting ErrFile to fd 2...
	I1212 15:14:29.843941    3806 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 15:14:29.844136    3806 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17777-1259/.minikube/bin
	I1212 15:14:29.844319    3806 out.go:303] Setting JSON to false
	I1212 15:14:29.844343    3806 mustload.go:65] Loading cluster: multinode-449000
	I1212 15:14:29.844378    3806 notify.go:220] Checking for updates...
	I1212 15:14:29.844623    3806 config.go:182] Loaded profile config "multinode-449000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1212 15:14:29.844636    3806 status.go:255] checking status of multinode-449000 ...
	I1212 15:14:29.845046    3806 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1212 15:14:29.845100    3806 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1212 15:14:29.853582    3806 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51401
	I1212 15:14:29.853967    3806 main.go:141] libmachine: () Calling .GetVersion
	I1212 15:14:29.854379    3806 main.go:141] libmachine: Using API Version  1
	I1212 15:14:29.854389    3806 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 15:14:29.854637    3806 main.go:141] libmachine: () Calling .GetMachineName
	I1212 15:14:29.854748    3806 main.go:141] libmachine: (multinode-449000) Calling .GetState
	I1212 15:14:29.854835    3806 main.go:141] libmachine: (multinode-449000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1212 15:14:29.854894    3806 main.go:141] libmachine: (multinode-449000) DBG | hyperkit pid from json: 3784
	I1212 15:14:29.855850    3806 status.go:330] multinode-449000 host status = "Running" (err=<nil>)
	I1212 15:14:29.855866    3806 host.go:66] Checking if "multinode-449000" exists ...
	I1212 15:14:29.856093    3806 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1212 15:14:29.856113    3806 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1212 15:14:29.863878    3806 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51403
	I1212 15:14:29.864197    3806 main.go:141] libmachine: () Calling .GetVersion
	I1212 15:14:29.864543    3806 main.go:141] libmachine: Using API Version  1
	I1212 15:14:29.864557    3806 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 15:14:29.864777    3806 main.go:141] libmachine: () Calling .GetMachineName
	I1212 15:14:29.864882    3806 main.go:141] libmachine: (multinode-449000) Calling .GetIP
	I1212 15:14:29.864962    3806 host.go:66] Checking if "multinode-449000" exists ...
	I1212 15:14:29.865221    3806 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1212 15:14:29.865244    3806 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1212 15:14:29.875418    3806 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51405
	I1212 15:14:29.875771    3806 main.go:141] libmachine: () Calling .GetVersion
	I1212 15:14:29.876105    3806 main.go:141] libmachine: Using API Version  1
	I1212 15:14:29.876132    3806 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 15:14:29.876355    3806 main.go:141] libmachine: () Calling .GetMachineName
	I1212 15:14:29.876480    3806 main.go:141] libmachine: (multinode-449000) Calling .DriverName
	I1212 15:14:29.876617    3806 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 15:14:29.876641    3806 main.go:141] libmachine: (multinode-449000) Calling .GetSSHHostname
	I1212 15:14:29.876736    3806 main.go:141] libmachine: (multinode-449000) Calling .GetSSHPort
	I1212 15:14:29.876811    3806 main.go:141] libmachine: (multinode-449000) Calling .GetSSHKeyPath
	I1212 15:14:29.876891    3806 main.go:141] libmachine: (multinode-449000) Calling .GetSSHUsername
	I1212 15:14:29.876985    3806 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17777-1259/.minikube/machines/multinode-449000/id_rsa Username:docker}
	I1212 15:14:29.920403    3806 ssh_runner.go:195] Run: systemctl --version
	I1212 15:14:29.924000    3806 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 15:14:29.933320    3806 kubeconfig.go:92] found "multinode-449000" server: "https://192.169.0.13:8443"
	I1212 15:14:29.933341    3806 api_server.go:166] Checking apiserver status ...
	I1212 15:14:29.933377    3806 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 15:14:29.942396    3806 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1624/cgroup
	I1212 15:14:29.949633    3806 api_server.go:182] apiserver freezer: "9:freezer:/kubepods/burstable/pod713a71f0e8f1e4f4a127fa5f9adf437f/7e9188da4ac199b6a80c316f724744376df4e8620954d6151e46da44fed5ade1"
	I1212 15:14:29.949684    3806 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod713a71f0e8f1e4f4a127fa5f9adf437f/7e9188da4ac199b6a80c316f724744376df4e8620954d6151e46da44fed5ade1/freezer.state
	I1212 15:14:29.956034    3806 api_server.go:204] freezer state: "THAWED"
	I1212 15:14:29.956053    3806 api_server.go:253] Checking apiserver healthz at https://192.169.0.13:8443/healthz ...
	I1212 15:14:29.959897    3806 api_server.go:279] https://192.169.0.13:8443/healthz returned 200:
	ok
	I1212 15:14:29.959908    3806 status.go:421] multinode-449000 apiserver status = Running (err=<nil>)
	I1212 15:14:29.959918    3806 status.go:257] multinode-449000 status: &{Name:multinode-449000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:402: (dbg) Run:  kubectl get nodes
multinode_test.go:410: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
multinode_test.go:415: expected 2 nodes Ready status to be True, got 
-- stdout --
	' True
	'

                                                
                                                
-- /stdout --
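The check at multinode_test.go:410-415 above renders one line per node with the status of its Ready condition and expects two True lines; after this restart only the control-plane node reports Ready. A minimal Go sketch of the same count, assuming kubectl is on PATH with the context from this report; the go-template is copied from the failed command:

-- example (not from the test run) --
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// One line per node containing the status of its Ready condition.
		tmpl := `go-template={{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}`
		out, err := exec.Command("kubectl", "--context", "multinode-449000",
			"get", "nodes", "-o", tmpl).Output()
		if err != nil {
			fmt.Println("kubectl failed:", err)
			return
		}
		ready := strings.Count(string(out), "True")
		fmt.Printf("nodes reporting Ready=True: %d (the test expects 2)\n", ready)
	}
-- /example --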
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-449000 -n multinode-449000
helpers_test.go:244: <<< TestMultiNode/serial/RestartMultiNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartMultiNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-449000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p multinode-449000 logs -n 25: (2.891895514s)
helpers_test.go:252: TestMultiNode/serial/RestartMultiNode logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| kubectl | -p multinode-449000 -- get pods -o   | multinode-449000 | jenkins | v1.32.0 | 12 Dec 23 15:10 PST |                     |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |         |         |                     |                     |
	| kubectl | -p multinode-449000 -- get pods -o   | multinode-449000 | jenkins | v1.32.0 | 12 Dec 23 15:10 PST |                     |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |         |         |                     |                     |
	| kubectl | -p multinode-449000 -- get pods -o   | multinode-449000 | jenkins | v1.32.0 | 12 Dec 23 15:10 PST |                     |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |         |         |                     |                     |
	| kubectl | -p multinode-449000 -- get pods -o   | multinode-449000 | jenkins | v1.32.0 | 12 Dec 23 15:10 PST |                     |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |         |         |                     |                     |
	| kubectl | -p multinode-449000 -- get pods -o   | multinode-449000 | jenkins | v1.32.0 | 12 Dec 23 15:10 PST |                     |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |         |         |                     |                     |
	| kubectl | -p multinode-449000 -- get pods -o   | multinode-449000 | jenkins | v1.32.0 | 12 Dec 23 15:10 PST |                     |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |         |         |                     |                     |
	| kubectl | -p multinode-449000 -- get pods -o   | multinode-449000 | jenkins | v1.32.0 | 12 Dec 23 15:11 PST |                     |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |         |         |                     |                     |
	| kubectl | -p multinode-449000 -- get pods -o   | multinode-449000 | jenkins | v1.32.0 | 12 Dec 23 15:11 PST |                     |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |         |         |                     |                     |
	| kubectl | -p multinode-449000 -- get pods -o   | multinode-449000 | jenkins | v1.32.0 | 12 Dec 23 15:11 PST |                     |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |         |         |                     |                     |
	| kubectl | -p multinode-449000 -- get pods -o   | multinode-449000 | jenkins | v1.32.0 | 12 Dec 23 15:12 PST |                     |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |         |         |                     |                     |
	| kubectl | -p multinode-449000 -- get pods -o   | multinode-449000 | jenkins | v1.32.0 | 12 Dec 23 15:12 PST |                     |
	|         | jsonpath='{.items[*].metadata.name}' |                  |         |         |                     |                     |
	| kubectl | -p multinode-449000 -- exec          | multinode-449000 | jenkins | v1.32.0 | 12 Dec 23 15:12 PST |                     |
	|         | -- nslookup kubernetes.io            |                  |         |         |                     |                     |
	| kubectl | -p multinode-449000 -- exec          | multinode-449000 | jenkins | v1.32.0 | 12 Dec 23 15:12 PST |                     |
	|         | -- nslookup kubernetes.default       |                  |         |         |                     |                     |
	| kubectl | -p multinode-449000                  | multinode-449000 | jenkins | v1.32.0 | 12 Dec 23 15:12 PST |                     |
	|         | -- exec  -- nslookup                 |                  |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |                  |         |         |                     |                     |
	| kubectl | -p multinode-449000 -- get pods -o   | multinode-449000 | jenkins | v1.32.0 | 12 Dec 23 15:12 PST |                     |
	|         | jsonpath='{.items[*].metadata.name}' |                  |         |         |                     |                     |
	| node    | add -p multinode-449000 -v 3         | multinode-449000 | jenkins | v1.32.0 | 12 Dec 23 15:12 PST |                     |
	|         | --alsologtostderr                    |                  |         |         |                     |                     |
	| node    | multinode-449000 node stop m03       | multinode-449000 | jenkins | v1.32.0 | 12 Dec 23 15:12 PST |                     |
	| node    | multinode-449000 node start          | multinode-449000 | jenkins | v1.32.0 | 12 Dec 23 15:12 PST |                     |
	|         | m03 --alsologtostderr                |                  |         |         |                     |                     |
	| node    | list -p multinode-449000             | multinode-449000 | jenkins | v1.32.0 | 12 Dec 23 15:12 PST |                     |
	| stop    | -p multinode-449000                  | multinode-449000 | jenkins | v1.32.0 | 12 Dec 23 15:12 PST | 12 Dec 23 15:12 PST |
	| start   | -p multinode-449000                  | multinode-449000 | jenkins | v1.32.0 | 12 Dec 23 15:12 PST | 12 Dec 23 15:13 PST |
	|         | --wait=true -v=8                     |                  |         |         |                     |                     |
	|         | --alsologtostderr                    |                  |         |         |                     |                     |
	| node    | list -p multinode-449000             | multinode-449000 | jenkins | v1.32.0 | 12 Dec 23 15:13 PST |                     |
	| node    | multinode-449000 node delete         | multinode-449000 | jenkins | v1.32.0 | 12 Dec 23 15:13 PST |                     |
	|         | m03                                  |                  |         |         |                     |                     |
	| stop    | multinode-449000 stop                | multinode-449000 | jenkins | v1.32.0 | 12 Dec 23 15:13 PST | 12 Dec 23 15:13 PST |
	| start   | -p multinode-449000                  | multinode-449000 | jenkins | v1.32.0 | 12 Dec 23 15:13 PST | 12 Dec 23 15:14 PST |
	|         | --wait=true -v=8                     |                  |         |         |                     |                     |
	|         | --alsologtostderr                    |                  |         |         |                     |                     |
	|         | --driver=hyperkit                    |                  |         |         |                     |                     |
	|---------|--------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/12 15:13:43
	Running on machine: MacOS-Agent-2
	Binary: Built with gc go1.21.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 15:13:43.329351    3771 out.go:296] Setting OutFile to fd 1 ...
	I1212 15:13:43.329636    3771 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 15:13:43.329643    3771 out.go:309] Setting ErrFile to fd 2...
	I1212 15:13:43.329647    3771 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 15:13:43.329834    3771 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17777-1259/.minikube/bin
	I1212 15:13:43.331227    3771 out.go:303] Setting JSON to false
	I1212 15:13:43.353471    3771 start.go:128] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":2594,"bootTime":1702420229,"procs":431,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.2","kernelVersion":"23.2.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W1212 15:13:43.353562    3771 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1212 15:13:43.375701    3771 out.go:177] * [multinode-449000] minikube v1.32.0 on Darwin 14.2
	I1212 15:13:43.418316    3771 out.go:177]   - MINIKUBE_LOCATION=17777
	I1212 15:13:43.418405    3771 notify.go:220] Checking for updates...
	I1212 15:13:43.461075    3771 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17777-1259/kubeconfig
	I1212 15:13:43.482155    3771 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1212 15:13:43.503045    3771 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 15:13:43.524247    3771 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17777-1259/.minikube
	I1212 15:13:43.545251    3771 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 15:13:43.566913    3771 config.go:182] Loaded profile config "multinode-449000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1212 15:13:43.567596    3771 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1212 15:13:43.567683    3771 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1212 15:13:43.576898    3771 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51358
	I1212 15:13:43.577265    3771 main.go:141] libmachine: () Calling .GetVersion
	I1212 15:13:43.577720    3771 main.go:141] libmachine: Using API Version  1
	I1212 15:13:43.577730    3771 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 15:13:43.577946    3771 main.go:141] libmachine: () Calling .GetMachineName
	I1212 15:13:43.578045    3771 main.go:141] libmachine: (multinode-449000) Calling .DriverName
	I1212 15:13:43.578230    3771 driver.go:392] Setting default libvirt URI to qemu:///system
	I1212 15:13:43.578465    3771 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1212 15:13:43.578485    3771 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1212 15:13:43.586432    3771 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51360
	I1212 15:13:43.586750    3771 main.go:141] libmachine: () Calling .GetVersion
	I1212 15:13:43.587112    3771 main.go:141] libmachine: Using API Version  1
	I1212 15:13:43.587131    3771 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 15:13:43.587336    3771 main.go:141] libmachine: () Calling .GetMachineName
	I1212 15:13:43.587444    3771 main.go:141] libmachine: (multinode-449000) Calling .DriverName
	I1212 15:13:43.616247    3771 out.go:177] * Using the hyperkit driver based on existing profile
	I1212 15:13:43.637552    3771 start.go:298] selected driver: hyperkit
	I1212 15:13:43.637581    3771 start.go:902] validating driver "hyperkit" against &{Name:multinode-449000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17738/minikube-v1.32.1-1701996673-17738-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-449000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.169.0.13 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 15:13:43.637805    3771 start.go:913] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 15:13:43.638034    3771 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 15:13:43.638218    3771 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/17777-1259/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I1212 15:13:43.647279    3771 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.32.0
	I1212 15:13:43.651763    3771 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1212 15:13:43.651784    3771 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I1212 15:13:43.654459    3771 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 15:13:43.654534    3771 cni.go:84] Creating CNI manager for ""
	I1212 15:13:43.654543    3771 cni.go:136] 1 nodes found, recommending kindnet
	I1212 15:13:43.654555    3771 start_flags.go:323] config:
	{Name:multinode-449000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17738/minikube-v1.32.1-1701996673-17738-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-449000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.169.0.13 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 15:13:43.654732    3771 iso.go:125] acquiring lock: {Name:mk96a55b7848c6dd3321ed62339797ab51ac6b5a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 15:13:43.697196    3771 out.go:177] * Starting control plane node multinode-449000 in cluster multinode-449000
	I1212 15:13:43.718313    3771 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1212 15:13:43.718388    3771 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17777-1259/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I1212 15:13:43.718420    3771 cache.go:56] Caching tarball of preloaded images
	I1212 15:13:43.718600    3771 preload.go:174] Found /Users/jenkins/minikube-integration/17777-1259/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1212 15:13:43.718619    3771 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I1212 15:13:43.718770    3771 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/multinode-449000/config.json ...
	I1212 15:13:43.719759    3771 start.go:365] acquiring machines lock for multinode-449000: {Name:mk51496c390b032727acf9b9a5f67e389f19ec26 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1212 15:13:43.719892    3771 start.go:369] acquired machines lock for "multinode-449000" in 108.978µs
	I1212 15:13:43.719929    3771 start.go:96] Skipping create...Using existing machine configuration
	I1212 15:13:43.719945    3771 fix.go:54] fixHost starting: 
	I1212 15:13:43.720362    3771 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1212 15:13:43.720392    3771 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1212 15:13:43.729071    3771 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51362
	I1212 15:13:43.729451    3771 main.go:141] libmachine: () Calling .GetVersion
	I1212 15:13:43.729834    3771 main.go:141] libmachine: Using API Version  1
	I1212 15:13:43.729849    3771 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 15:13:43.730064    3771 main.go:141] libmachine: () Calling .GetMachineName
	I1212 15:13:43.730191    3771 main.go:141] libmachine: (multinode-449000) Calling .DriverName
	I1212 15:13:43.730286    3771 main.go:141] libmachine: (multinode-449000) Calling .GetState
	I1212 15:13:43.730380    3771 main.go:141] libmachine: (multinode-449000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1212 15:13:43.730435    3771 main.go:141] libmachine: (multinode-449000) DBG | hyperkit pid from json: 3705
	I1212 15:13:43.731386    3771 main.go:141] libmachine: (multinode-449000) DBG | hyperkit pid 3705 missing from process table
	I1212 15:13:43.731436    3771 fix.go:102] recreateIfNeeded on multinode-449000: state=Stopped err=<nil>
	I1212 15:13:43.731467    3771 main.go:141] libmachine: (multinode-449000) Calling .DriverName
	W1212 15:13:43.731554    3771 fix.go:128] unexpected machine state, will restart: <nil>
	I1212 15:13:43.752018    3771 out.go:177] * Restarting existing hyperkit VM for "multinode-449000" ...
	I1212 15:13:43.774307    3771 main.go:141] libmachine: (multinode-449000) Calling .Start
	I1212 15:13:43.774631    3771 main.go:141] libmachine: (multinode-449000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1212 15:13:43.774714    3771 main.go:141] libmachine: (multinode-449000) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/17777-1259/.minikube/machines/multinode-449000/hyperkit.pid
	I1212 15:13:43.776598    3771 main.go:141] libmachine: (multinode-449000) DBG | hyperkit pid 3705 missing from process table
	I1212 15:13:43.776630    3771 main.go:141] libmachine: (multinode-449000) DBG | pid 3705 is in state "Stopped"
	I1212 15:13:43.776645    3771 main.go:141] libmachine: (multinode-449000) DBG | Removing stale pid file /Users/jenkins/minikube-integration/17777-1259/.minikube/machines/multinode-449000/hyperkit.pid...
	I1212 15:13:43.776785    3771 main.go:141] libmachine: (multinode-449000) DBG | Using UUID 9fde523a-9943-11ee-8111-f01898ef957c
	I1212 15:13:43.891476    3771 main.go:141] libmachine: (multinode-449000) DBG | Generated MAC f2:78:2:3f:65:80
	I1212 15:13:43.891501    3771 main.go:141] libmachine: (multinode-449000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-449000
	I1212 15:13:43.891665    3771 main.go:141] libmachine: (multinode-449000) DBG | 2023/12/12 15:13:43 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/17777-1259/.minikube/machines/multinode-449000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"9fde523a-9943-11ee-8111-f01898ef957c", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc00044ab70)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/17777-1259/.minikube/machines/multinode-449000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/17777-1259/.minikube/machines/multinode-449000/bzimage", Initrd:"/Users/jenkins/minikube-integration/17777-1259/.minikube/machines/multinode-449000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1212 15:13:43.891706    3771 main.go:141] libmachine: (multinode-449000) DBG | 2023/12/12 15:13:43 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/17777-1259/.minikube/machines/multinode-449000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"9fde523a-9943-11ee-8111-f01898ef957c", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc00044ab70)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/17777-1259/.minikube/machines/multinode-449000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/17777-1259/.minikube/machines/multinode-449000/bzimage", Initrd:"/Users/jenkins/minikube-integration/17777-1259/.minikube/machines/multinode-449000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1212 15:13:43.891737    3771 main.go:141] libmachine: (multinode-449000) DBG | 2023/12/12 15:13:43 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/17777-1259/.minikube/machines/multinode-449000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "9fde523a-9943-11ee-8111-f01898ef957c", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/17777-1259/.minikube/machines/multinode-449000/multinode-449000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/17777-1259/.minikube/machines/multinode-449000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/17777-1259/.minikube/machines/multinode-449000/tty,log=/Users/jenkins/minikube-integration/17777-1259/.minikube/machines/multinode-449000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/17777-1259/.minikube/machines/multinode-449000/bzimage,/Users/jenkins/minikube-integration/17777-1259/.minikube/machines/multinode-449000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-449000"}
	I1212 15:13:43.891765    3771 main.go:141] libmachine: (multinode-449000) DBG | 2023/12/12 15:13:43 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/17777-1259/.minikube/machines/multinode-449000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 9fde523a-9943-11ee-8111-f01898ef957c -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/17777-1259/.minikube/machines/multinode-449000/multinode-449000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/17777-1259/.minikube/machines/multinode-449000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/17777-1259/.minikube/machines/multinode-449000/tty,log=/Users/jenkins/minikube-integration/17777-1259/.minikube/machines/multinode-449000/console-ring -f kexec,/Users/jenkins/minikube-integration/17777-1259/.minikube/machines/multinode-449000/bzimage,/Users/jenkins/minikube-integration/17777-1259/.minikube/machines/multinode-449000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-449000"
	I1212 15:13:43.891807    3771 main.go:141] libmachine: (multinode-449000) DBG | 2023/12/12 15:13:43 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I1212 15:13:43.893216    3771 main.go:141] libmachine: (multinode-449000) DBG | 2023/12/12 15:13:43 DEBUG: hyperkit: Pid is 3784
	I1212 15:13:43.893716    3771 main.go:141] libmachine: (multinode-449000) DBG | Attempt 0
	I1212 15:13:43.893745    3771 main.go:141] libmachine: (multinode-449000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1212 15:13:43.893840    3771 main.go:141] libmachine: (multinode-449000) DBG | hyperkit pid from json: 3784
	I1212 15:13:43.895617    3771 main.go:141] libmachine: (multinode-449000) DBG | Searching for f2:78:2:3f:65:80 in /var/db/dhcpd_leases ...
	I1212 15:13:43.895659    3771 main.go:141] libmachine: (multinode-449000) DBG | Found 12 entries in /var/db/dhcpd_leases!
	I1212 15:13:43.895692    3771 main.go:141] libmachine: (multinode-449000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:f2:78:2:3f:65:80 ID:1,f2:78:2:3f:65:80 Lease:0x657a3a69}
	I1212 15:13:43.895711    3771 main.go:141] libmachine: (multinode-449000) DBG | Found match: f2:78:2:3f:65:80
	I1212 15:13:43.895724    3771 main.go:141] libmachine: (multinode-449000) DBG | IP: 192.169.0.13
	I1212 15:13:43.895762    3771 main.go:141] libmachine: (multinode-449000) Calling .GetConfigRaw
	I1212 15:13:43.896425    3771 main.go:141] libmachine: (multinode-449000) Calling .GetIP
	I1212 15:13:43.896587    3771 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/multinode-449000/config.json ...
	I1212 15:13:43.896935    3771 machine.go:88] provisioning docker machine ...
	I1212 15:13:43.896953    3771 main.go:141] libmachine: (multinode-449000) Calling .DriverName
	I1212 15:13:43.897103    3771 main.go:141] libmachine: (multinode-449000) Calling .GetMachineName
	I1212 15:13:43.897207    3771 buildroot.go:166] provisioning hostname "multinode-449000"
	I1212 15:13:43.897217    3771 main.go:141] libmachine: (multinode-449000) Calling .GetMachineName
	I1212 15:13:43.897332    3771 main.go:141] libmachine: (multinode-449000) Calling .GetSSHHostname
	I1212 15:13:43.897465    3771 main.go:141] libmachine: (multinode-449000) Calling .GetSSHPort
	I1212 15:13:43.897554    3771 main.go:141] libmachine: (multinode-449000) Calling .GetSSHKeyPath
	I1212 15:13:43.897657    3771 main.go:141] libmachine: (multinode-449000) Calling .GetSSHKeyPath
	I1212 15:13:43.897751    3771 main.go:141] libmachine: (multinode-449000) Calling .GetSSHUsername
	I1212 15:13:43.897899    3771 main.go:141] libmachine: Using SSH client type: native
	I1212 15:13:43.898337    3771 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1406180] 0x1408e60 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I1212 15:13:43.898354    3771 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-449000 && echo "multinode-449000" | sudo tee /etc/hostname
	I1212 15:13:43.901231    3771 main.go:141] libmachine: (multinode-449000) DBG | 2023/12/12 15:13:43 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I1212 15:13:43.958533    3771 main.go:141] libmachine: (multinode-449000) DBG | 2023/12/12 15:13:43 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/17777-1259/.minikube/machines/multinode-449000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I1212 15:13:43.959211    3771 main.go:141] libmachine: (multinode-449000) DBG | 2023/12/12 15:13:43 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1212 15:13:43.959226    3771 main.go:141] libmachine: (multinode-449000) DBG | 2023/12/12 15:13:43 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1212 15:13:43.959235    3771 main.go:141] libmachine: (multinode-449000) DBG | 2023/12/12 15:13:43 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1212 15:13:43.959244    3771 main.go:141] libmachine: (multinode-449000) DBG | 2023/12/12 15:13:43 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1212 15:13:44.327912    3771 main.go:141] libmachine: (multinode-449000) DBG | 2023/12/12 15:13:44 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I1212 15:13:44.327926    3771 main.go:141] libmachine: (multinode-449000) DBG | 2023/12/12 15:13:44 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I1212 15:13:44.431915    3771 main.go:141] libmachine: (multinode-449000) DBG | 2023/12/12 15:13:44 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1212 15:13:44.431937    3771 main.go:141] libmachine: (multinode-449000) DBG | 2023/12/12 15:13:44 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1212 15:13:44.431997    3771 main.go:141] libmachine: (multinode-449000) DBG | 2023/12/12 15:13:44 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1212 15:13:44.432015    3771 main.go:141] libmachine: (multinode-449000) DBG | 2023/12/12 15:13:44 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1212 15:13:44.432841    3771 main.go:141] libmachine: (multinode-449000) DBG | 2023/12/12 15:13:44 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I1212 15:13:44.432859    3771 main.go:141] libmachine: (multinode-449000) DBG | 2023/12/12 15:13:44 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I1212 15:13:49.344220    3771 main.go:141] libmachine: (multinode-449000) DBG | 2023/12/12 15:13:49 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I1212 15:13:49.344264    3771 main.go:141] libmachine: (multinode-449000) DBG | 2023/12/12 15:13:49 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I1212 15:13:49.344275    3771 main.go:141] libmachine: (multinode-449000) DBG | 2023/12/12 15:13:49 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I1212 15:13:54.998755    3771 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-449000
	
	I1212 15:13:54.998780    3771 main.go:141] libmachine: (multinode-449000) Calling .GetSSHHostname
	I1212 15:13:54.998914    3771 main.go:141] libmachine: (multinode-449000) Calling .GetSSHPort
	I1212 15:13:54.999010    3771 main.go:141] libmachine: (multinode-449000) Calling .GetSSHKeyPath
	I1212 15:13:54.999111    3771 main.go:141] libmachine: (multinode-449000) Calling .GetSSHKeyPath
	I1212 15:13:54.999201    3771 main.go:141] libmachine: (multinode-449000) Calling .GetSSHUsername
	I1212 15:13:54.999386    3771 main.go:141] libmachine: Using SSH client type: native
	I1212 15:13:54.999646    3771 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1406180] 0x1408e60 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I1212 15:13:54.999659    3771 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-449000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-449000/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-449000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 15:13:55.089255    3771 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1212 15:13:55.089275    3771 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/17777-1259/.minikube CaCertPath:/Users/jenkins/minikube-integration/17777-1259/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/17777-1259/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/17777-1259/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/17777-1259/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/17777-1259/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/17777-1259/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/17777-1259/.minikube}
	I1212 15:13:55.089288    3771 buildroot.go:174] setting up certificates
	I1212 15:13:55.089300    3771 provision.go:83] configureAuth start
	I1212 15:13:55.089307    3771 main.go:141] libmachine: (multinode-449000) Calling .GetMachineName
	I1212 15:13:55.089438    3771 main.go:141] libmachine: (multinode-449000) Calling .GetIP
	I1212 15:13:55.089529    3771 main.go:141] libmachine: (multinode-449000) Calling .GetSSHHostname
	I1212 15:13:55.089616    3771 provision.go:138] copyHostCerts
	I1212 15:13:55.089647    3771 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17777-1259/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/17777-1259/.minikube/ca.pem
	I1212 15:13:55.089693    3771 exec_runner.go:144] found /Users/jenkins/minikube-integration/17777-1259/.minikube/ca.pem, removing ...
	I1212 15:13:55.089702    3771 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17777-1259/.minikube/ca.pem
	I1212 15:13:55.089847    3771 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17777-1259/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/17777-1259/.minikube/ca.pem (1082 bytes)
	I1212 15:13:55.090071    3771 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17777-1259/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/17777-1259/.minikube/cert.pem
	I1212 15:13:55.090099    3771 exec_runner.go:144] found /Users/jenkins/minikube-integration/17777-1259/.minikube/cert.pem, removing ...
	I1212 15:13:55.090103    3771 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17777-1259/.minikube/cert.pem
	I1212 15:13:55.090190    3771 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17777-1259/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/17777-1259/.minikube/cert.pem (1123 bytes)
	I1212 15:13:55.090343    3771 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17777-1259/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/17777-1259/.minikube/key.pem
	I1212 15:13:55.090369    3771 exec_runner.go:144] found /Users/jenkins/minikube-integration/17777-1259/.minikube/key.pem, removing ...
	I1212 15:13:55.090373    3771 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17777-1259/.minikube/key.pem
	I1212 15:13:55.090450    3771 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17777-1259/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/17777-1259/.minikube/key.pem (1675 bytes)
	I1212 15:13:55.090611    3771 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/17777-1259/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/17777-1259/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/17777-1259/.minikube/certs/ca-key.pem org=jenkins.multinode-449000 san=[192.169.0.13 192.169.0.13 localhost 127.0.0.1 minikube multinode-449000]
	I1212 15:13:55.271107    3771 provision.go:172] copyRemoteCerts
	I1212 15:13:55.271169    3771 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 15:13:55.271189    3771 main.go:141] libmachine: (multinode-449000) Calling .GetSSHHostname
	I1212 15:13:55.271337    3771 main.go:141] libmachine: (multinode-449000) Calling .GetSSHPort
	I1212 15:13:55.271437    3771 main.go:141] libmachine: (multinode-449000) Calling .GetSSHKeyPath
	I1212 15:13:55.271555    3771 main.go:141] libmachine: (multinode-449000) Calling .GetSSHUsername
	I1212 15:13:55.271652    3771 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17777-1259/.minikube/machines/multinode-449000/id_rsa Username:docker}
	I1212 15:13:55.317278    3771 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17777-1259/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1212 15:13:55.317339    3771 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17777-1259/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1212 15:13:55.333169    3771 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17777-1259/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1212 15:13:55.333221    3771 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17777-1259/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1212 15:13:55.349042    3771 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17777-1259/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1212 15:13:55.349098    3771 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17777-1259/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1212 15:13:55.365041    3771 provision.go:86] duration metric: configureAuth took 275.72719ms
	I1212 15:13:55.365052    3771 buildroot.go:189] setting minikube options for container-runtime
	I1212 15:13:55.365183    3771 config.go:182] Loaded profile config "multinode-449000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1212 15:13:55.365209    3771 main.go:141] libmachine: (multinode-449000) Calling .DriverName
	I1212 15:13:55.365343    3771 main.go:141] libmachine: (multinode-449000) Calling .GetSSHHostname
	I1212 15:13:55.365429    3771 main.go:141] libmachine: (multinode-449000) Calling .GetSSHPort
	I1212 15:13:55.365504    3771 main.go:141] libmachine: (multinode-449000) Calling .GetSSHKeyPath
	I1212 15:13:55.365590    3771 main.go:141] libmachine: (multinode-449000) Calling .GetSSHKeyPath
	I1212 15:13:55.365677    3771 main.go:141] libmachine: (multinode-449000) Calling .GetSSHUsername
	I1212 15:13:55.365805    3771 main.go:141] libmachine: Using SSH client type: native
	I1212 15:13:55.366042    3771 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1406180] 0x1408e60 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I1212 15:13:55.366050    3771 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1212 15:13:55.449153    3771 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1212 15:13:55.449165    3771 buildroot.go:70] root file system type: tmpfs
	I1212 15:13:55.449241    3771 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1212 15:13:55.449253    3771 main.go:141] libmachine: (multinode-449000) Calling .GetSSHHostname
	I1212 15:13:55.449379    3771 main.go:141] libmachine: (multinode-449000) Calling .GetSSHPort
	I1212 15:13:55.449475    3771 main.go:141] libmachine: (multinode-449000) Calling .GetSSHKeyPath
	I1212 15:13:55.449582    3771 main.go:141] libmachine: (multinode-449000) Calling .GetSSHKeyPath
	I1212 15:13:55.449697    3771 main.go:141] libmachine: (multinode-449000) Calling .GetSSHUsername
	I1212 15:13:55.449817    3771 main.go:141] libmachine: Using SSH client type: native
	I1212 15:13:55.450066    3771 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1406180] 0x1408e60 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I1212 15:13:55.450115    3771 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1212 15:13:55.539935    3771 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1212 15:13:55.539954    3771 main.go:141] libmachine: (multinode-449000) Calling .GetSSHHostname
	I1212 15:13:55.540093    3771 main.go:141] libmachine: (multinode-449000) Calling .GetSSHPort
	I1212 15:13:55.540209    3771 main.go:141] libmachine: (multinode-449000) Calling .GetSSHKeyPath
	I1212 15:13:55.540299    3771 main.go:141] libmachine: (multinode-449000) Calling .GetSSHKeyPath
	I1212 15:13:55.540394    3771 main.go:141] libmachine: (multinode-449000) Calling .GetSSHUsername
	I1212 15:13:55.540521    3771 main.go:141] libmachine: Using SSH client type: native
	I1212 15:13:55.540768    3771 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1406180] 0x1408e60 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I1212 15:13:55.540781    3771 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1212 15:13:56.129664    3771 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1212 15:13:56.129680    3771 machine.go:91] provisioned docker machine in 12.23281994s
	I1212 15:13:56.129689    3771 start.go:300] post-start starting for "multinode-449000" (driver="hyperkit")
	I1212 15:13:56.129699    3771 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 15:13:56.129710    3771 main.go:141] libmachine: (multinode-449000) Calling .DriverName
	I1212 15:13:56.129887    3771 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 15:13:56.129900    3771 main.go:141] libmachine: (multinode-449000) Calling .GetSSHHostname
	I1212 15:13:56.129990    3771 main.go:141] libmachine: (multinode-449000) Calling .GetSSHPort
	I1212 15:13:56.130094    3771 main.go:141] libmachine: (multinode-449000) Calling .GetSSHKeyPath
	I1212 15:13:56.130194    3771 main.go:141] libmachine: (multinode-449000) Calling .GetSSHUsername
	I1212 15:13:56.130268    3771 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17777-1259/.minikube/machines/multinode-449000/id_rsa Username:docker}
	I1212 15:13:56.175772    3771 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 15:13:56.178246    3771 command_runner.go:130] > NAME=Buildroot
	I1212 15:13:56.178255    3771 command_runner.go:130] > VERSION=2021.02.12-1-g0ec83c8-dirty
	I1212 15:13:56.178261    3771 command_runner.go:130] > ID=buildroot
	I1212 15:13:56.178265    3771 command_runner.go:130] > VERSION_ID=2021.02.12
	I1212 15:13:56.178271    3771 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I1212 15:13:56.178350    3771 info.go:137] Remote host: Buildroot 2021.02.12
	I1212 15:13:56.178359    3771 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17777-1259/.minikube/addons for local assets ...
	I1212 15:13:56.178448    3771 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17777-1259/.minikube/files for local assets ...
	I1212 15:13:56.178621    3771 filesync.go:149] local asset: /Users/jenkins/minikube-integration/17777-1259/.minikube/files/etc/ssl/certs/17202.pem -> 17202.pem in /etc/ssl/certs
	I1212 15:13:56.178627    3771 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17777-1259/.minikube/files/etc/ssl/certs/17202.pem -> /etc/ssl/certs/17202.pem
	I1212 15:13:56.178823    3771 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 15:13:56.184440    3771 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17777-1259/.minikube/files/etc/ssl/certs/17202.pem --> /etc/ssl/certs/17202.pem (1708 bytes)
	I1212 15:13:56.200705    3771 start.go:303] post-start completed in 71.007035ms
	I1212 15:13:56.200715    3771 fix.go:56] fixHost completed within 12.48086098s
	I1212 15:13:56.200732    3771 main.go:141] libmachine: (multinode-449000) Calling .GetSSHHostname
	I1212 15:13:56.200861    3771 main.go:141] libmachine: (multinode-449000) Calling .GetSSHPort
	I1212 15:13:56.200958    3771 main.go:141] libmachine: (multinode-449000) Calling .GetSSHKeyPath
	I1212 15:13:56.201052    3771 main.go:141] libmachine: (multinode-449000) Calling .GetSSHKeyPath
	I1212 15:13:56.201150    3771 main.go:141] libmachine: (multinode-449000) Calling .GetSSHUsername
	I1212 15:13:56.201268    3771 main.go:141] libmachine: Using SSH client type: native
	I1212 15:13:56.201506    3771 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1406180] 0x1408e60 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I1212 15:13:56.201514    3771 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1212 15:13:56.283943    3771 main.go:141] libmachine: SSH cmd err, output: <nil>: 1702422836.360274147
	
	I1212 15:13:56.283954    3771 fix.go:206] guest clock: 1702422836.360274147
	I1212 15:13:56.283959    3771 fix.go:219] Guest: 2023-12-12 15:13:56.360274147 -0800 PST Remote: 2023-12-12 15:13:56.200717 -0800 PST m=+12.914984199 (delta=159.557147ms)
	I1212 15:13:56.283979    3771 fix.go:190] guest clock delta is within tolerance: 159.557147ms
	I1212 15:13:56.283982    3771 start.go:83] releasing machines lock for "multinode-449000", held for 12.564165817s
	I1212 15:13:56.283999    3771 main.go:141] libmachine: (multinode-449000) Calling .DriverName
	I1212 15:13:56.284131    3771 main.go:141] libmachine: (multinode-449000) Calling .GetIP
	I1212 15:13:56.284237    3771 main.go:141] libmachine: (multinode-449000) Calling .DriverName
	I1212 15:13:56.284578    3771 main.go:141] libmachine: (multinode-449000) Calling .DriverName
	I1212 15:13:56.284685    3771 main.go:141] libmachine: (multinode-449000) Calling .DriverName
	I1212 15:13:56.284760    3771 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 15:13:56.284793    3771 main.go:141] libmachine: (multinode-449000) Calling .GetSSHHostname
	I1212 15:13:56.284885    3771 ssh_runner.go:195] Run: cat /version.json
	I1212 15:13:56.284894    3771 main.go:141] libmachine: (multinode-449000) Calling .GetSSHPort
	I1212 15:13:56.284896    3771 main.go:141] libmachine: (multinode-449000) Calling .GetSSHHostname
	I1212 15:13:56.284996    3771 main.go:141] libmachine: (multinode-449000) Calling .GetSSHPort
	I1212 15:13:56.285032    3771 main.go:141] libmachine: (multinode-449000) Calling .GetSSHKeyPath
	I1212 15:13:56.285124    3771 main.go:141] libmachine: (multinode-449000) Calling .GetSSHKeyPath
	I1212 15:13:56.285125    3771 main.go:141] libmachine: (multinode-449000) Calling .GetSSHUsername
	I1212 15:13:56.285224    3771 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17777-1259/.minikube/machines/multinode-449000/id_rsa Username:docker}
	I1212 15:13:56.285260    3771 main.go:141] libmachine: (multinode-449000) Calling .GetSSHUsername
	I1212 15:13:56.285340    3771 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17777-1259/.minikube/machines/multinode-449000/id_rsa Username:docker}
	I1212 15:13:56.327896    3771 command_runner.go:130] > {"iso_version": "v1.32.1-1701996673-17738", "kicbase_version": "v0.0.42-1701974066-17719", "minikube_version": "v1.32.0", "commit": "2518fadffa02a308edcd7fa670f350a21819c5e4"}
	I1212 15:13:56.328083    3771 ssh_runner.go:195] Run: systemctl --version
	I1212 15:13:56.378492    3771 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1212 15:13:56.379477    3771 command_runner.go:130] > systemd 247 (247)
	I1212 15:13:56.379500    3771 command_runner.go:130] > -PAM -AUDIT -SELINUX -IMA -APPARMOR -SMACK -SYSVINIT -UTMP -LIBCRYPTSETUP -GCRYPT -GNUTLS +ACL +XZ +LZ4 -ZSTD +SECCOMP +BLKID -ELFUTILS +KMOD -IDN2 -IDN -PCRE2 default-hierarchy=hybrid
	I1212 15:13:56.379599    3771 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1212 15:13:56.383781    3771 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1212 15:13:56.383812    3771 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 15:13:56.383854    3771 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 15:13:56.393739    3771 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I1212 15:13:56.393765    3771 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1212 15:13:56.393775    3771 start.go:475] detecting cgroup driver to use...
	I1212 15:13:56.393882    3771 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 15:13:56.406190    3771 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I1212 15:13:56.406474    3771 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1212 15:13:56.413617    3771 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1212 15:13:56.420666    3771 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1212 15:13:56.420709    3771 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1212 15:13:56.427628    3771 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 15:13:56.434787    3771 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1212 15:13:56.441782    3771 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 15:13:56.448834    3771 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 15:13:56.456191    3771 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1212 15:13:56.463155    3771 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 15:13:56.469218    3771 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1212 15:13:56.469405    3771 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 15:13:56.475880    3771 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 15:13:56.562408    3771 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1212 15:13:56.574546    3771 start.go:475] detecting cgroup driver to use...
	I1212 15:13:56.574619    3771 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1212 15:13:56.583633    3771 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I1212 15:13:56.583646    3771 command_runner.go:130] > [Unit]
	I1212 15:13:56.583651    3771 command_runner.go:130] > Description=Docker Application Container Engine
	I1212 15:13:56.583656    3771 command_runner.go:130] > Documentation=https://docs.docker.com
	I1212 15:13:56.583660    3771 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I1212 15:13:56.583665    3771 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I1212 15:13:56.583670    3771 command_runner.go:130] > StartLimitBurst=3
	I1212 15:13:56.583675    3771 command_runner.go:130] > StartLimitIntervalSec=60
	I1212 15:13:56.583679    3771 command_runner.go:130] > [Service]
	I1212 15:13:56.583683    3771 command_runner.go:130] > Type=notify
	I1212 15:13:56.583688    3771 command_runner.go:130] > Restart=on-failure
	I1212 15:13:56.583694    3771 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I1212 15:13:56.583701    3771 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I1212 15:13:56.583707    3771 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I1212 15:13:56.583712    3771 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I1212 15:13:56.583718    3771 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I1212 15:13:56.583723    3771 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I1212 15:13:56.583728    3771 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I1212 15:13:56.583736    3771 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I1212 15:13:56.583742    3771 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I1212 15:13:56.583748    3771 command_runner.go:130] > ExecStart=
	I1212 15:13:56.583759    3771 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	I1212 15:13:56.583764    3771 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I1212 15:13:56.583771    3771 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I1212 15:13:56.583777    3771 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I1212 15:13:56.583780    3771 command_runner.go:130] > LimitNOFILE=infinity
	I1212 15:13:56.583785    3771 command_runner.go:130] > LimitNPROC=infinity
	I1212 15:13:56.583788    3771 command_runner.go:130] > LimitCORE=infinity
	I1212 15:13:56.583793    3771 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I1212 15:13:56.583797    3771 command_runner.go:130] > # Only systemd 226 and above support this version.
	I1212 15:13:56.583808    3771 command_runner.go:130] > TasksMax=infinity
	I1212 15:13:56.583811    3771 command_runner.go:130] > TimeoutStartSec=0
	I1212 15:13:56.583817    3771 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I1212 15:13:56.583820    3771 command_runner.go:130] > Delegate=yes
	I1212 15:13:56.583826    3771 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I1212 15:13:56.583831    3771 command_runner.go:130] > KillMode=process
	I1212 15:13:56.583835    3771 command_runner.go:130] > [Install]
	I1212 15:13:56.583841    3771 command_runner.go:130] > WantedBy=multi-user.target
	I1212 15:13:56.583901    3771 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 15:13:56.596831    3771 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 15:13:56.611786    3771 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 15:13:56.620327    3771 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1212 15:13:56.628591    3771 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1212 15:13:56.647750    3771 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1212 15:13:56.656581    3771 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 15:13:56.668647    3771 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I1212 15:13:56.668925    3771 ssh_runner.go:195] Run: which cri-dockerd
	I1212 15:13:56.671286    3771 command_runner.go:130] > /usr/bin/cri-dockerd
	I1212 15:13:56.671487    3771 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1212 15:13:56.678026    3771 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1212 15:13:56.689143    3771 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1212 15:13:56.774359    3771 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1212 15:13:56.867135    3771 docker.go:560] configuring docker to use "cgroupfs" as cgroup driver...
	I1212 15:13:56.867220    3771 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1212 15:13:56.878967    3771 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 15:13:56.966300    3771 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1212 15:13:58.267688    3771 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.301372836s)
	I1212 15:13:58.267758    3771 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1212 15:13:58.349399    3771 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1212 15:13:58.447130    3771 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1212 15:13:58.543936    3771 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 15:13:58.638744    3771 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1212 15:13:58.650768    3771 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 15:13:58.743074    3771 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I1212 15:13:58.796444    3771 start.go:522] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1212 15:13:58.796524    3771 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1212 15:13:58.800333    3771 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I1212 15:13:58.800354    3771 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1212 15:13:58.800365    3771 command_runner.go:130] > Device: 16h/22d	Inode: 917         Links: 1
	I1212 15:13:58.800375    3771 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I1212 15:13:58.800385    3771 command_runner.go:130] > Access: 2023-12-12 23:13:58.831122984 +0000
	I1212 15:13:58.800400    3771 command_runner.go:130] > Modify: 2023-12-12 23:13:58.831122984 +0000
	I1212 15:13:58.800408    3771 command_runner.go:130] > Change: 2023-12-12 23:13:58.832122984 +0000
	I1212 15:13:58.800412    3771 command_runner.go:130] >  Birth: -
	I1212 15:13:58.800558    3771 start.go:543] Will wait 60s for crictl version
	I1212 15:13:58.800607    3771 ssh_runner.go:195] Run: which crictl
	I1212 15:13:58.803008    3771 command_runner.go:130] > /usr/bin/crictl
	I1212 15:13:58.803218    3771 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1212 15:13:58.833139    3771 command_runner.go:130] > Version:  0.1.0
	I1212 15:13:58.833151    3771 command_runner.go:130] > RuntimeName:  docker
	I1212 15:13:58.833155    3771 command_runner.go:130] > RuntimeVersion:  24.0.7
	I1212 15:13:58.833159    3771 command_runner.go:130] > RuntimeApiVersion:  v1
	I1212 15:13:58.834236    3771 start.go:559] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.7
	RuntimeApiVersion:  v1
	I1212 15:13:58.834300    3771 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1212 15:13:58.850818    3771 command_runner.go:130] > 24.0.7
	I1212 15:13:58.851778    3771 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1212 15:13:58.869096    3771 command_runner.go:130] > 24.0.7
	I1212 15:13:58.891455    3771 out.go:204] * Preparing Kubernetes v1.28.4 on Docker 24.0.7 ...
	I1212 15:13:58.891503    3771 main.go:141] libmachine: (multinode-449000) Calling .GetIP
	I1212 15:13:58.891894    3771 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I1212 15:13:58.895939    3771 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 15:13:58.904684    3771 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1212 15:13:58.904741    3771 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1212 15:13:58.917097    3771 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.28.4
	I1212 15:13:58.917109    3771 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.28.4
	I1212 15:13:58.917114    3771 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.28.4
	I1212 15:13:58.917118    3771 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.28.4
	I1212 15:13:58.917128    3771 command_runner.go:130] > kindest/kindnetd:v20230809-80a64d96
	I1212 15:13:58.917133    3771 command_runner.go:130] > registry.k8s.io/etcd:3.5.9-0
	I1212 15:13:58.917137    3771 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.10.1
	I1212 15:13:58.917142    3771 command_runner.go:130] > registry.k8s.io/pause:3.9
	I1212 15:13:58.917148    3771 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 15:13:58.917694    3771 docker.go:671] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	kindest/kindnetd:v20230809-80a64d96
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1212 15:13:58.917710    3771 docker.go:601] Images already preloaded, skipping extraction
	I1212 15:13:58.917783    3771 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1212 15:13:58.930386    3771 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.28.4
	I1212 15:13:58.930398    3771 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.28.4
	I1212 15:13:58.930402    3771 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.28.4
	I1212 15:13:58.930407    3771 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.28.4
	I1212 15:13:58.930412    3771 command_runner.go:130] > kindest/kindnetd:v20230809-80a64d96
	I1212 15:13:58.930416    3771 command_runner.go:130] > registry.k8s.io/etcd:3.5.9-0
	I1212 15:13:58.930421    3771 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.10.1
	I1212 15:13:58.930425    3771 command_runner.go:130] > registry.k8s.io/pause:3.9
	I1212 15:13:58.930430    3771 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 15:13:58.931035    3771 docker.go:671] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	kindest/kindnetd:v20230809-80a64d96
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1212 15:13:58.931053    3771 cache_images.go:84] Images are preloaded, skipping loading
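
For reference, the preload check above amounts to listing the runtime's images and comparing them against the expected image set for the target Kubernetes version; only if something is missing does minikube extract the preload tarball. A minimal Go sketch of that idea (the hard-coded expectedImages list here is an illustrative stand-in for the real per-version manifest, and a local docker CLI is assumed on PATH):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// expectedImages is a hypothetical stand-in for the per-Kubernetes-version
// preload manifest that the real code consults.
var expectedImages = []string{
	"registry.k8s.io/kube-apiserver:v1.28.4",
	"registry.k8s.io/etcd:3.5.9-0",
	"registry.k8s.io/pause:3.9",
}

func main() {
	// Same command the log shows: docker images --format {{.Repository}}:{{.Tag}}
	out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
	if err != nil {
		panic(err)
	}
	have := map[string]bool{}
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		have[line] = true
	}
	for _, img := range expectedImages {
		if !have[img] {
			fmt.Println("missing, would extract preload:", img)
			return
		}
	}
	fmt.Println("images already preloaded, skipping extraction")
}
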
	I1212 15:13:58.931126    3771 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1212 15:13:58.948121    3771 command_runner.go:130] > cgroupfs
	I1212 15:13:58.948691    3771 cni.go:84] Creating CNI manager for ""
	I1212 15:13:58.948700    3771 cni.go:136] 1 nodes found, recommending kindnet
	I1212 15:13:58.948711    3771 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1212 15:13:58.948731    3771 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.169.0.13 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-449000 NodeName:multinode-449000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.169.0.13"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.169.0.13 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 15:13:58.948804    3771 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.169.0.13
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-449000"
	  kubeletExtraArgs:
	    node-ip: 192.169.0.13
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.169.0.13"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1212 15:13:58.948856    3771 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=multinode-449000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.13
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-449000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
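
The kubelet drop-in shown just above is essentially the node's settings rendered into a systemd unit fragment and then copied onto the VM. A rough sketch of that rendering step with Go's text/template; the nodeConfig type and its field names are made up for illustration and are not minikube's actual types:

package main

import (
	"os"
	"text/template"
)

// nodeConfig holds the handful of values substituted into the drop-in;
// the field names are illustrative only.
type nodeConfig struct {
	KubeletPath string
	CRISocket   string
	Hostname    string
	NodeIP      string
}

const dropIn = `[Service]
ExecStart=
ExecStart={{.KubeletPath}} --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint={{.CRISocket}} --hostname-override={{.Hostname}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}
`

func main() {
	cfg := nodeConfig{
		KubeletPath: "/var/lib/minikube/binaries/v1.28.4/kubelet",
		CRISocket:   "unix:///var/run/cri-dockerd.sock",
		Hostname:    "multinode-449000",
		NodeIP:      "192.169.0.13",
	}
	// Render to stdout; the real flow instead copies the rendered bytes over SSH.
	if err := template.Must(template.New("dropin").Parse(dropIn)).Execute(os.Stdout, cfg); err != nil {
		panic(err)
	}
}
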
	I1212 15:13:58.948911    3771 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1212 15:13:58.954663    3771 command_runner.go:130] > kubeadm
	I1212 15:13:58.954671    3771 command_runner.go:130] > kubectl
	I1212 15:13:58.954674    3771 command_runner.go:130] > kubelet
	I1212 15:13:58.954831    3771 binaries.go:44] Found k8s binaries, skipping transfer
	I1212 15:13:58.954876    3771 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 15:13:58.960394    3771 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (377 bytes)
	I1212 15:13:58.971305    3771 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1212 15:13:58.982542    3771 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2099 bytes)
	I1212 15:13:58.993760    3771 ssh_runner.go:195] Run: grep 192.169.0.13	control-plane.minikube.internal$ /etc/hosts
	I1212 15:13:58.996007    3771 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.13	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
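
The two /etc/hosts rewrites above (for host.minikube.internal and control-plane.minikube.internal) follow the same pattern: strip any existing line for the name, then append a fresh "ip<TAB>name" entry. A small Go sketch of that pipeline, assuming it runs with permission to write /etc/hosts:

package main

import (
	"os"
	"strings"
)

// ensureHostsEntry drops any existing line ending in "\t<name>" and appends
// "ip\tname", mirroring the grep -v / echo / cp pipeline in the log.
func ensureHostsEntry(hostsPath, ip, name string) error {
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	// Values taken from the log above; writing /etc/hosts requires root.
	if err := ensureHostsEntry("/etc/hosts", "192.169.0.13", "control-plane.minikube.internal"); err != nil {
		panic(err)
	}
}
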
	I1212 15:13:59.004458    3771 certs.go:56] Setting up /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/multinode-449000 for IP: 192.169.0.13
	I1212 15:13:59.004476    3771 certs.go:190] acquiring lock for shared ca certs: {Name:mkc116deb15cbfbe8939fd5907655f41e3f69c78 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 15:13:59.004608    3771 certs.go:199] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/17777-1259/.minikube/ca.key
	I1212 15:13:59.004665    3771 certs.go:199] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/17777-1259/.minikube/proxy-client-ca.key
	I1212 15:13:59.004758    3771 certs.go:315] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/multinode-449000/client.key
	I1212 15:13:59.004826    3771 certs.go:315] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/multinode-449000/apiserver.key.ff8d457b
	I1212 15:13:59.004876    3771 certs.go:315] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/multinode-449000/proxy-client.key
	I1212 15:13:59.004884    3771 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/multinode-449000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1212 15:13:59.004906    3771 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/multinode-449000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1212 15:13:59.004924    3771 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/multinode-449000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1212 15:13:59.004940    3771 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/multinode-449000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1212 15:13:59.004957    3771 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17777-1259/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1212 15:13:59.004979    3771 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17777-1259/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1212 15:13:59.004996    3771 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17777-1259/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1212 15:13:59.005012    3771 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17777-1259/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1212 15:13:59.005101    3771 certs.go:437] found cert: /Users/jenkins/minikube-integration/17777-1259/.minikube/certs/Users/jenkins/minikube-integration/17777-1259/.minikube/certs/1720.pem (1338 bytes)
	W1212 15:13:59.005143    3771 certs.go:433] ignoring /Users/jenkins/minikube-integration/17777-1259/.minikube/certs/Users/jenkins/minikube-integration/17777-1259/.minikube/certs/1720_empty.pem, impossibly tiny 0 bytes
	I1212 15:13:59.005152    3771 certs.go:437] found cert: /Users/jenkins/minikube-integration/17777-1259/.minikube/certs/Users/jenkins/minikube-integration/17777-1259/.minikube/certs/ca-key.pem (1675 bytes)
	I1212 15:13:59.005185    3771 certs.go:437] found cert: /Users/jenkins/minikube-integration/17777-1259/.minikube/certs/Users/jenkins/minikube-integration/17777-1259/.minikube/certs/ca.pem (1082 bytes)
	I1212 15:13:59.005214    3771 certs.go:437] found cert: /Users/jenkins/minikube-integration/17777-1259/.minikube/certs/Users/jenkins/minikube-integration/17777-1259/.minikube/certs/cert.pem (1123 bytes)
	I1212 15:13:59.005242    3771 certs.go:437] found cert: /Users/jenkins/minikube-integration/17777-1259/.minikube/certs/Users/jenkins/minikube-integration/17777-1259/.minikube/certs/key.pem (1675 bytes)
	I1212 15:13:59.005308    3771 certs.go:437] found cert: /Users/jenkins/minikube-integration/17777-1259/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/17777-1259/.minikube/files/etc/ssl/certs/17202.pem (1708 bytes)
	I1212 15:13:59.005337    3771 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17777-1259/.minikube/certs/1720.pem -> /usr/share/ca-certificates/1720.pem
	I1212 15:13:59.005357    3771 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17777-1259/.minikube/files/etc/ssl/certs/17202.pem -> /usr/share/ca-certificates/17202.pem
	I1212 15:13:59.005374    3771 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17777-1259/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1212 15:13:59.005786    3771 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/multinode-449000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1212 15:13:59.021936    3771 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/multinode-449000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1212 15:13:59.038361    3771 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/multinode-449000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 15:13:59.054693    3771 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/multinode-449000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1212 15:13:59.071411    3771 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17777-1259/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 15:13:59.087340    3771 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17777-1259/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1212 15:13:59.103877    3771 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17777-1259/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 15:13:59.119964    3771 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17777-1259/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 15:13:59.136448    3771 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17777-1259/.minikube/certs/1720.pem --> /usr/share/ca-certificates/1720.pem (1338 bytes)
	I1212 15:13:59.153111    3771 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17777-1259/.minikube/files/etc/ssl/certs/17202.pem --> /usr/share/ca-certificates/17202.pem (1708 bytes)
	I1212 15:13:59.169208    3771 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17777-1259/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 15:13:59.185327    3771 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 15:13:59.196733    3771 ssh_runner.go:195] Run: openssl version
	I1212 15:13:59.199986    3771 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I1212 15:13:59.200202    3771 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1720.pem && ln -fs /usr/share/ca-certificates/1720.pem /etc/ssl/certs/1720.pem"
	I1212 15:13:59.206477    3771 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1720.pem
	I1212 15:13:59.209238    3771 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 12 22:59 /usr/share/ca-certificates/1720.pem
	I1212 15:13:59.209416    3771 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec 12 22:59 /usr/share/ca-certificates/1720.pem
	I1212 15:13:59.209451    3771 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1720.pem
	I1212 15:13:59.212711    3771 command_runner.go:130] > 51391683
	I1212 15:13:59.212898    3771 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1720.pem /etc/ssl/certs/51391683.0"
	I1212 15:13:59.219216    3771 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17202.pem && ln -fs /usr/share/ca-certificates/17202.pem /etc/ssl/certs/17202.pem"
	I1212 15:13:59.225539    3771 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17202.pem
	I1212 15:13:59.228367    3771 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 12 22:59 /usr/share/ca-certificates/17202.pem
	I1212 15:13:59.228571    3771 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec 12 22:59 /usr/share/ca-certificates/17202.pem
	I1212 15:13:59.228607    3771 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17202.pem
	I1212 15:13:59.231859    3771 command_runner.go:130] > 3ec20f2e
	I1212 15:13:59.232097    3771 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/17202.pem /etc/ssl/certs/3ec20f2e.0"
	I1212 15:13:59.238744    3771 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1212 15:13:59.245027    3771 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 15:13:59.247747    3771 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 12 22:55 /usr/share/ca-certificates/minikubeCA.pem
	I1212 15:13:59.247863    3771 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec 12 22:55 /usr/share/ca-certificates/minikubeCA.pem
	I1212 15:13:59.247894    3771 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 15:13:59.251159    3771 command_runner.go:130] > b5213941
	I1212 15:13:59.251357    3771 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
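
The openssl x509 -hash / ln -fs pairs above implement the standard OpenSSL CA-directory layout, where each trusted certificate is reachable through a symlink named after its subject hash (e.g. b5213941.0). A minimal Go sketch of those two steps, shelling out to openssl the same way the log does (openssl on PATH and write access to /etc/ssl/certs are assumed):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// linkCert computes the subject hash of certPath with openssl and creates
// /etc/ssl/certs/<hash>.0 pointing at it, like the "openssl x509 -hash" +
// "ln -fs" pair in the log.
func linkCert(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	os.Remove(link) // ignore error: the link may not exist yet
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
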
	I1212 15:13:59.257758    3771 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1212 15:13:59.260339    3771 command_runner.go:130] > ca.crt
	I1212 15:13:59.260347    3771 command_runner.go:130] > ca.key
	I1212 15:13:59.260351    3771 command_runner.go:130] > healthcheck-client.crt
	I1212 15:13:59.260355    3771 command_runner.go:130] > healthcheck-client.key
	I1212 15:13:59.260358    3771 command_runner.go:130] > peer.crt
	I1212 15:13:59.260362    3771 command_runner.go:130] > peer.key
	I1212 15:13:59.260365    3771 command_runner.go:130] > server.crt
	I1212 15:13:59.260371    3771 command_runner.go:130] > server.key
	I1212 15:13:59.260443    3771 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1212 15:13:59.263975    3771 command_runner.go:130] > Certificate will not expire
	I1212 15:13:59.264178    3771 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1212 15:13:59.267482    3771 command_runner.go:130] > Certificate will not expire
	I1212 15:13:59.267709    3771 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1212 15:13:59.271048    3771 command_runner.go:130] > Certificate will not expire
	I1212 15:13:59.271280    3771 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1212 15:13:59.274602    3771 command_runner.go:130] > Certificate will not expire
	I1212 15:13:59.274795    3771 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1212 15:13:59.278048    3771 command_runner.go:130] > Certificate will not expire
	I1212 15:13:59.278289    3771 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1212 15:13:59.281595    3771 command_runner.go:130] > Certificate will not expire
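
The repeated "openssl x509 -checkend 86400" calls above simply ask whether each certificate expires within the next 24 hours (86400 seconds). The equivalent check can be done directly in Go with crypto/x509; this is a sketch, not the code minikube ships:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/etcd/server.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if soon {
		fmt.Println("Certificate will expire")
	} else {
		fmt.Println("Certificate will not expire")
	}
}
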
	I1212 15:13:59.281774    3771 kubeadm.go:404] StartCluster: {Name:multinode-449000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17738/minikube-v1.32.1-1701996673-17738-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-449000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.169.0.13 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 15:13:59.281860    3771 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1212 15:13:59.294591    3771 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 15:13:59.300406    3771 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1212 15:13:59.300416    3771 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1212 15:13:59.300421    3771 command_runner.go:130] > /var/lib/minikube/etcd:
	I1212 15:13:59.300424    3771 command_runner.go:130] > member
	I1212 15:13:59.300625    3771 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1212 15:13:59.300640    3771 kubeadm.go:636] restartCluster start
	I1212 15:13:59.300680    3771 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1212 15:13:59.306370    3771 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1212 15:13:59.306651    3771 kubeconfig.go:135] verify returned: extract IP: "multinode-449000" does not appear in /Users/jenkins/minikube-integration/17777-1259/kubeconfig
	I1212 15:13:59.306715    3771 kubeconfig.go:146] "multinode-449000" context is missing from /Users/jenkins/minikube-integration/17777-1259/kubeconfig - will repair!
	I1212 15:13:59.306883    3771 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17777-1259/kubeconfig: {Name:mk59d3fcca7c93e43d82a40f16bbb777946cd182 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 15:13:59.307546    3771 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/17777-1259/kubeconfig
	I1212 15:13:59.307717    3771 kapi.go:59] client config for multinode-449000: &rest.Config{Host:"https://192.169.0.13:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/multinode-449000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/multinode-449000/client.key", CAFile:"/Users/jenkins/minikube-integration/17777-1259/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f5e20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1212 15:13:59.308124    3771 cert_rotation.go:137] Starting client certificate rotation controller
	I1212 15:13:59.308280    3771 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1212 15:13:59.313750    3771 api_server.go:166] Checking apiserver status ...
	I1212 15:13:59.313787    3771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 15:13:59.321379    3771 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 15:13:59.321388    3771 api_server.go:166] Checking apiserver status ...
	I1212 15:13:59.321423    3771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 15:13:59.328732    3771 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 15:13:59.828960    3771 api_server.go:166] Checking apiserver status ...
	I1212 15:13:59.829065    3771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 15:13:59.838366    3771 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 15:14:00.328823    3771 api_server.go:166] Checking apiserver status ...
	I1212 15:14:00.328972    3771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 15:14:00.337545    3771 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 15:14:00.829959    3771 api_server.go:166] Checking apiserver status ...
	I1212 15:14:00.830089    3771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 15:14:00.839002    3771 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 15:14:01.328864    3771 api_server.go:166] Checking apiserver status ...
	I1212 15:14:01.328966    3771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 15:14:01.338165    3771 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 15:14:01.829114    3771 api_server.go:166] Checking apiserver status ...
	I1212 15:14:01.829199    3771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 15:14:01.837903    3771 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 15:14:02.329065    3771 api_server.go:166] Checking apiserver status ...
	I1212 15:14:02.329172    3771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 15:14:02.338978    3771 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 15:14:02.828980    3771 api_server.go:166] Checking apiserver status ...
	I1212 15:14:02.829115    3771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 15:14:02.838329    3771 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 15:14:03.329196    3771 api_server.go:166] Checking apiserver status ...
	I1212 15:14:03.329290    3771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 15:14:03.338361    3771 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 15:14:03.830825    3771 api_server.go:166] Checking apiserver status ...
	I1212 15:14:03.830990    3771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 15:14:03.840303    3771 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 15:14:04.329256    3771 api_server.go:166] Checking apiserver status ...
	I1212 15:14:04.329390    3771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 15:14:04.338468    3771 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 15:14:04.829457    3771 api_server.go:166] Checking apiserver status ...
	I1212 15:14:04.829595    3771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 15:14:04.839285    3771 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 15:14:05.330838    3771 api_server.go:166] Checking apiserver status ...
	I1212 15:14:05.330982    3771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 15:14:05.340974    3771 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 15:14:05.830803    3771 api_server.go:166] Checking apiserver status ...
	I1212 15:14:05.831000    3771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 15:14:05.840917    3771 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 15:14:06.329487    3771 api_server.go:166] Checking apiserver status ...
	I1212 15:14:06.329593    3771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 15:14:06.339038    3771 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 15:14:06.828786    3771 api_server.go:166] Checking apiserver status ...
	I1212 15:14:06.828891    3771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 15:14:06.838185    3771 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 15:14:07.329192    3771 api_server.go:166] Checking apiserver status ...
	I1212 15:14:07.329370    3771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 15:14:07.338973    3771 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 15:14:07.828795    3771 api_server.go:166] Checking apiserver status ...
	I1212 15:14:07.828934    3771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 15:14:07.837898    3771 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 15:14:08.329759    3771 api_server.go:166] Checking apiserver status ...
	I1212 15:14:08.329917    3771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 15:14:08.362033    3771 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 15:14:08.830769    3771 api_server.go:166] Checking apiserver status ...
	I1212 15:14:08.830889    3771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 15:14:08.840262    3771 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 15:14:09.314654    3771 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1212 15:14:09.314676    3771 kubeadm.go:1135] stopping kube-system containers ...
	I1212 15:14:09.314779    3771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1212 15:14:09.330716    3771 command_runner.go:130] > 95bc5fcd783f
	I1212 15:14:09.330728    3771 command_runner.go:130] > 349aceac4c90
	I1212 15:14:09.330732    3771 command_runner.go:130] > 9d7e822b848f
	I1212 15:14:09.330735    3771 command_runner.go:130] > 29a2e0536a84
	I1212 15:14:09.330739    3771 command_runner.go:130] > 58bbe956bbc0
	I1212 15:14:09.330761    3771 command_runner.go:130] > bc270a1f54f3
	I1212 15:14:09.330769    3771 command_runner.go:130] > 8189af807d9f
	I1212 15:14:09.330772    3771 command_runner.go:130] > 58468ea0d336
	I1212 15:14:09.330776    3771 command_runner.go:130] > f52a90b7997c
	I1212 15:14:09.330784    3771 command_runner.go:130] > cbf4f7124455
	I1212 15:14:09.330791    3771 command_runner.go:130] > d57c6b9df1bf
	I1212 15:14:09.330795    3771 command_runner.go:130] > a65940e255b0
	I1212 15:14:09.330798    3771 command_runner.go:130] > e84049d10a45
	I1212 15:14:09.330804    3771 command_runner.go:130] > e22fa4a926f7
	I1212 15:14:09.330807    3771 command_runner.go:130] > de90edd09b0e
	I1212 15:14:09.330811    3771 command_runner.go:130] > 4a6892d4d834
	I1212 15:14:09.331318    3771 docker.go:469] Stopping containers: [95bc5fcd783f 349aceac4c90 9d7e822b848f 29a2e0536a84 58bbe956bbc0 bc270a1f54f3 8189af807d9f 58468ea0d336 f52a90b7997c cbf4f7124455 d57c6b9df1bf a65940e255b0 e84049d10a45 e22fa4a926f7 de90edd09b0e 4a6892d4d834]
	I1212 15:14:09.331394    3771 ssh_runner.go:195] Run: docker stop 95bc5fcd783f 349aceac4c90 9d7e822b848f 29a2e0536a84 58bbe956bbc0 bc270a1f54f3 8189af807d9f 58468ea0d336 f52a90b7997c cbf4f7124455 d57c6b9df1bf a65940e255b0 e84049d10a45 e22fa4a926f7 de90edd09b0e 4a6892d4d834
	I1212 15:14:09.345453    3771 command_runner.go:130] > 95bc5fcd783f
	I1212 15:14:09.345505    3771 command_runner.go:130] > 349aceac4c90
	I1212 15:14:09.345850    3771 command_runner.go:130] > 9d7e822b848f
	I1212 15:14:09.345856    3771 command_runner.go:130] > 29a2e0536a84
	I1212 15:14:09.345868    3771 command_runner.go:130] > 58bbe956bbc0
	I1212 15:14:09.345871    3771 command_runner.go:130] > bc270a1f54f3
	I1212 15:14:09.345875    3771 command_runner.go:130] > 8189af807d9f
	I1212 15:14:09.345879    3771 command_runner.go:130] > 58468ea0d336
	I1212 15:14:09.345883    3771 command_runner.go:130] > f52a90b7997c
	I1212 15:14:09.345888    3771 command_runner.go:130] > cbf4f7124455
	I1212 15:14:09.345892    3771 command_runner.go:130] > d57c6b9df1bf
	I1212 15:14:09.346066    3771 command_runner.go:130] > a65940e255b0
	I1212 15:14:09.346192    3771 command_runner.go:130] > e84049d10a45
	I1212 15:14:09.346295    3771 command_runner.go:130] > e22fa4a926f7
	I1212 15:14:09.346421    3771 command_runner.go:130] > de90edd09b0e
	I1212 15:14:09.346521    3771 command_runner.go:130] > 4a6892d4d834
	I1212 15:14:09.347339    3771 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1212 15:14:09.357712    3771 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 15:14:09.363624    3771 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I1212 15:14:09.363635    3771 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I1212 15:14:09.363641    3771 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I1212 15:14:09.363647    3771 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 15:14:09.363777    3771 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 15:14:09.363815    3771 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 15:14:09.369988    3771 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1212 15:14:09.369997    3771 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 15:14:09.437623    3771 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 15:14:09.437872    3771 command_runner.go:130] > [certs] Using existing ca certificate authority
	I1212 15:14:09.438235    3771 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I1212 15:14:09.438608    3771 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1212 15:14:09.438955    3771 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I1212 15:14:09.439317    3771 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I1212 15:14:09.439780    3771 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I1212 15:14:09.440129    3771 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I1212 15:14:09.440409    3771 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I1212 15:14:09.440764    3771 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1212 15:14:09.441081    3771 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1212 15:14:09.442206    3771 command_runner.go:130] > [certs] Using the existing "sa" key
	I1212 15:14:09.442271    3771 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 15:14:09.480810    3771 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 15:14:09.605863    3771 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 15:14:09.705903    3771 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 15:14:09.836681    3771 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 15:14:09.936485    3771 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 15:14:09.938390    3771 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1212 15:14:09.981114    3771 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 15:14:09.981772    3771 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 15:14:09.981781    3771 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I1212 15:14:10.074085    3771 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 15:14:10.118876    3771 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 15:14:10.118890    3771 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 15:14:10.120790    3771 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 15:14:10.121682    3771 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 15:14:10.123236    3771 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1212 15:14:10.166497    3771 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 15:14:10.171566    3771 api_server.go:52] waiting for apiserver process to appear ...
	I1212 15:14:10.171623    3771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 15:14:10.181356    3771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 15:14:10.690556    3771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 15:14:11.191585    3771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 15:14:11.690651    3771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 15:14:11.701020    3771 command_runner.go:130] > 1624
	I1212 15:14:11.701053    3771 api_server.go:72] duration metric: took 1.529503187s to wait for apiserver process to appear ...
	I1212 15:14:11.701060    3771 api_server.go:88] waiting for apiserver healthz status ...
	I1212 15:14:11.701074    3771 api_server.go:253] Checking apiserver healthz at https://192.169.0.13:8443/healthz ...
	I1212 15:14:13.976743    3771 api_server.go:279] https://192.169.0.13:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1212 15:14:13.976759    3771 api_server.go:103] status: https://192.169.0.13:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1212 15:14:13.976777    3771 api_server.go:253] Checking apiserver healthz at https://192.169.0.13:8443/healthz ...
	I1212 15:14:13.982576    3771 api_server.go:279] https://192.169.0.13:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1212 15:14:13.982590    3771 api_server.go:103] status: https://192.169.0.13:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1212 15:14:14.483940    3771 api_server.go:253] Checking apiserver healthz at https://192.169.0.13:8443/healthz ...
	I1212 15:14:14.489507    3771 api_server.go:279] https://192.169.0.13:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1212 15:14:14.489522    3771 api_server.go:103] status: https://192.169.0.13:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1212 15:14:14.983721    3771 api_server.go:253] Checking apiserver healthz at https://192.169.0.13:8443/healthz ...
	I1212 15:14:14.988624    3771 api_server.go:279] https://192.169.0.13:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1212 15:14:14.988638    3771 api_server.go:103] status: https://192.169.0.13:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1212 15:14:15.482834    3771 api_server.go:253] Checking apiserver healthz at https://192.169.0.13:8443/healthz ...
	I1212 15:14:15.487405    3771 api_server.go:279] https://192.169.0.13:8443/healthz returned 200:
	ok
	I1212 15:14:15.487485    3771 round_trippers.go:463] GET https://192.169.0.13:8443/version
	I1212 15:14:15.487492    3771 round_trippers.go:469] Request Headers:
	I1212 15:14:15.487500    3771 round_trippers.go:473]     Accept: application/json, */*
	I1212 15:14:15.487505    3771 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 15:14:15.492732    3771 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1212 15:14:15.492741    3771 round_trippers.go:577] Response Headers:
	I1212 15:14:15.492747    3771 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 15:14:15.492752    3771 round_trippers.go:580]     Content-Type: application/json
	I1212 15:14:15.492757    3771 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e29f512c-9329-4cd8-9305-82ad26aea980
	I1212 15:14:15.492761    3771 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ddf0869-55a0-47cb-b7ef-d98ab66cac66
	I1212 15:14:15.492766    3771 round_trippers.go:580]     Content-Length: 264
	I1212 15:14:15.492771    3771 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:15 GMT
	I1212 15:14:15.492776    3771 round_trippers.go:580]     Audit-Id: 03aba207-f2c7-4454-8960-b7e9e66723f2
	I1212 15:14:15.492795    3771 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.4",
	  "gitCommit": "bae2c62678db2b5053817bc97181fcc2e8388103",
	  "gitTreeState": "clean",
	  "buildDate": "2023-11-15T16:48:54Z",
	  "goVersion": "go1.20.11",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I1212 15:14:15.492843    3771 api_server.go:141] control plane version: v1.28.4
	I1212 15:14:15.492852    3771 api_server.go:131] duration metric: took 3.79181405s to wait for apiserver health ...
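
The wait that just completed is a plain HTTPS poll of /healthz until the apiserver answers 200, tolerating the transient 403 and 500 responses seen above while the RBAC bootstrap-roles hook is still running. A rough Go sketch of such a poll; note the InsecureSkipVerify is a shortcut for the sketch only, whereas the real client verifies against the cluster CA:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns 200 OK or timeout elapses.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Sketch-only shortcut; the real code trusts the cluster CA instead.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			// 403/500 while bootstrap hooks finish: report and retry.
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver never became healthy at %s", url)
}

func main() {
	if err := waitForHealthz("https://192.169.0.13:8443/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}
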
	I1212 15:14:15.492858    3771 cni.go:84] Creating CNI manager for ""
	I1212 15:14:15.492862    3771 cni.go:136] 1 nodes found, recommending kindnet
	I1212 15:14:15.515206    3771 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1212 15:14:15.536159    3771 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1212 15:14:15.540716    3771 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I1212 15:14:15.540728    3771 command_runner.go:130] >   Size: 2615256   	Blocks: 5112       IO Block: 4096   regular file
	I1212 15:14:15.540739    3771 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I1212 15:14:15.540744    3771 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I1212 15:14:15.540749    3771 command_runner.go:130] > Access: 2023-12-12 23:13:52.853122676 +0000
	I1212 15:14:15.540754    3771 command_runner.go:130] > Modify: 2023-12-08 06:25:18.000000000 +0000
	I1212 15:14:15.540758    3771 command_runner.go:130] > Change: 2023-12-12 23:13:51.063496692 +0000
	I1212 15:14:15.540762    3771 command_runner.go:130] >  Birth: -
	I1212 15:14:15.541039    3771 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I1212 15:14:15.541048    3771 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1212 15:14:15.554226    3771 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1212 15:14:16.342973    3771 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I1212 15:14:16.345606    3771 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I1212 15:14:16.355014    3771 command_runner.go:130] > serviceaccount/kindnet unchanged
	I1212 15:14:16.365035    3771 command_runner.go:130] > daemonset.apps/kindnet configured
	I1212 15:14:16.367176    3771 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 15:14:16.367231    3771 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods
	I1212 15:14:16.367237    3771 round_trippers.go:469] Request Headers:
	I1212 15:14:16.367243    3771 round_trippers.go:473]     Accept: application/json, */*
	I1212 15:14:16.367249    3771 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 15:14:16.369324    3771 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 15:14:16.369332    3771 round_trippers.go:577] Response Headers:
	I1212 15:14:16.369337    3771 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:16 GMT
	I1212 15:14:16.369342    3771 round_trippers.go:580]     Audit-Id: 91a1f313-6784-47dc-bb52-9048cb18eba9
	I1212 15:14:16.369346    3771 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 15:14:16.369351    3771 round_trippers.go:580]     Content-Type: application/json
	I1212 15:14:16.369356    3771 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e29f512c-9329-4cd8-9305-82ad26aea980
	I1212 15:14:16.369360    3771 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ddf0869-55a0-47cb-b7ef-d98ab66cac66
	I1212 15:14:16.370057    3771 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"491"},"items":[{"metadata":{"name":"coredns-5dd5756b68-gbw2q","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"09d20e99-6d1a-46d5-858f-71585ab9e532","resourceVersion":"470","creationTimestamp":"2023-12-12T23:13:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a4c42d0c-8d38-4a37-89f9-122c8abf6177","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:13:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a4c42d0c-8d38-4a37-89f9-122c8abf6177\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 57312 chars]
	I1212 15:14:16.372520    3771 system_pods.go:59] 8 kube-system pods found
	I1212 15:14:16.372537    3771 system_pods.go:61] "coredns-5dd5756b68-gbw2q" [09d20e99-6d1a-46d5-858f-71585ab9e532] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 15:14:16.372543    3771 system_pods.go:61] "etcd-multinode-449000" [193c5da5-9957-4b0c-ac1f-0883f287dc0d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1212 15:14:16.372553    3771 system_pods.go:61] "kindnet-zkv5v" [92e2a49a-0055-4ae7-a167-fb51b4275183] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1212 15:14:16.372560    3771 system_pods.go:61] "kube-apiserver-multinode-449000" [d0340375-33dc-42b7-9b1d-6e66ff24d07b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1212 15:14:16.372566    3771 system_pods.go:61] "kube-controller-manager-multinode-449000" [3cdec7d9-450b-47be-b93b-a5f3985415fa] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1212 15:14:16.372571    3771 system_pods.go:61] "kube-proxy-hxq22" [d330b0b4-7d3f-4386-a72d-cb235945c494] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1212 15:14:16.372576    3771 system_pods.go:61] "kube-scheduler-multinode-449000" [6eda8382-3903-4ab4-96fb-afc4948c144b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1212 15:14:16.372581    3771 system_pods.go:61] "storage-provisioner" [11d647a8-b7f7-411a-b861-f3d109085770] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1212 15:14:16.372586    3771 system_pods.go:74] duration metric: took 5.402679ms to wait for pod list to return data ...
	I1212 15:14:16.372593    3771 node_conditions.go:102] verifying NodePressure condition ...
	I1212 15:14:16.372634    3771 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes
	I1212 15:14:16.372639    3771 round_trippers.go:469] Request Headers:
	I1212 15:14:16.372645    3771 round_trippers.go:473]     Accept: application/json, */*
	I1212 15:14:16.372650    3771 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 15:14:16.374408    3771 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 15:14:16.374417    3771 round_trippers.go:577] Response Headers:
	I1212 15:14:16.374423    3771 round_trippers.go:580]     Audit-Id: 0af28d9d-0e67-435c-9077-4b3978ee3598
	I1212 15:14:16.374428    3771 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 15:14:16.374433    3771 round_trippers.go:580]     Content-Type: application/json
	I1212 15:14:16.374438    3771 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e29f512c-9329-4cd8-9305-82ad26aea980
	I1212 15:14:16.374443    3771 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ddf0869-55a0-47cb-b7ef-d98ab66cac66
	I1212 15:14:16.374450    3771 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:16 GMT
	I1212 15:14:16.374557    3771 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"491"},"items":[{"metadata":{"name":"multinode-449000","uid":"ef12e061-4098-4e16-9f9d-81f611fa0b4f","resourceVersion":"434","creationTimestamp":"2023-12-12T23:13:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-449000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a4cfdd7fe6105c8f2fb237e157ac115c68ce5446","minikube.k8s.io/name":"multinode-449000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T15_13_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 5183 chars]
	I1212 15:14:16.374891    3771 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1212 15:14:16.374905    3771 node_conditions.go:123] node cpu capacity is 2
	I1212 15:14:16.374916    3771 node_conditions.go:105] duration metric: took 2.318901ms to run NodePressure ...
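The NodePressure step above reads node capacity and pressure conditions out of the NodeList response. A sketch of that read with client-go, again under an assumed kubeconfig path:

// Sketch: read node capacity and pressure conditions, mirroring the
// NodePressure verification above. Kubeconfig path is an assumed value.
package main

import (
	"context"
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/Users/jenkins/.kube/config") // assumed path
	if err != nil {
		log.Fatal(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	nodes, err := clientset.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, n := range nodes.Items {
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())
		// The log's NodePressure check looks at these condition types.
		for _, c := range n.Status.Conditions {
			if c.Type == corev1.NodeMemoryPressure || c.Type == corev1.NodeDiskPressure {
				fmt.Printf("  %s=%s\n", c.Type, c.Status)
			}
		}
	}
}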
	I1212 15:14:16.374926    3771 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 15:14:16.471862    3771 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I1212 15:14:16.509908    3771 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I1212 15:14:16.510830    3771 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1212 15:14:16.510903    3771 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods?labelSelector=tier%!D(MISSING)control-plane
	I1212 15:14:16.510912    3771 round_trippers.go:469] Request Headers:
	I1212 15:14:16.510919    3771 round_trippers.go:473]     Accept: application/json, */*
	I1212 15:14:16.510958    3771 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 15:14:16.513433    3771 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 15:14:16.513442    3771 round_trippers.go:577] Response Headers:
	I1212 15:14:16.513447    3771 round_trippers.go:580]     Audit-Id: 94e1a2d2-b00b-476e-ac99-915d31c38591
	I1212 15:14:16.513455    3771 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 15:14:16.513463    3771 round_trippers.go:580]     Content-Type: application/json
	I1212 15:14:16.513477    3771 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e29f512c-9329-4cd8-9305-82ad26aea980
	I1212 15:14:16.513487    3771 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ddf0869-55a0-47cb-b7ef-d98ab66cac66
	I1212 15:14:16.513495    3771 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:16 GMT
	I1212 15:14:16.513886    3771 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"493"},"items":[{"metadata":{"name":"etcd-multinode-449000","namespace":"kube-system","uid":"193c5da5-9957-4b0c-ac1f-0883f287dc0d","resourceVersion":"477","creationTimestamp":"2023-12-12T23:13:04Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.169.0.13:2379","kubernetes.io/config.hash":"1a832df13b4e9773d7a6b67fbfc8fb00","kubernetes.io/config.mirror":"1a832df13b4e9773d7a6b67fbfc8fb00","kubernetes.io/config.seen":"2023-12-12T23:13:04.726760505Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-449000","uid":"ef12e061-4098-4e16-9f9d-81f611fa0b4f","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:13:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations"
:{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kub [truncated 29734 chars]
	I1212 15:14:16.514594    3771 kubeadm.go:787] kubelet initialised
	I1212 15:14:16.514604    3771 kubeadm.go:788] duration metric: took 3.764962ms waiting for restarted kubelet to initialise ...
	I1212 15:14:16.514611    3771 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 15:14:16.514638    3771 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods
	I1212 15:14:16.514643    3771 round_trippers.go:469] Request Headers:
	I1212 15:14:16.514649    3771 round_trippers.go:473]     Accept: application/json, */*
	I1212 15:14:16.514654    3771 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 15:14:16.516570    3771 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 15:14:16.516580    3771 round_trippers.go:577] Response Headers:
	I1212 15:14:16.516587    3771 round_trippers.go:580]     Audit-Id: f053abe0-44fe-4109-8e74-53165e6809b6
	I1212 15:14:16.516598    3771 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 15:14:16.516607    3771 round_trippers.go:580]     Content-Type: application/json
	I1212 15:14:16.516613    3771 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e29f512c-9329-4cd8-9305-82ad26aea980
	I1212 15:14:16.516622    3771 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ddf0869-55a0-47cb-b7ef-d98ab66cac66
	I1212 15:14:16.516632    3771 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:16 GMT
	I1212 15:14:16.517166    3771 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"493"},"items":[{"metadata":{"name":"coredns-5dd5756b68-gbw2q","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"09d20e99-6d1a-46d5-858f-71585ab9e532","resourceVersion":"470","creationTimestamp":"2023-12-12T23:13:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a4c42d0c-8d38-4a37-89f9-122c8abf6177","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:13:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a4c42d0c-8d38-4a37-89f9-122c8abf6177\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 57312 chars]
	I1212 15:14:16.518459    3771 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-gbw2q" in "kube-system" namespace to be "Ready" ...
	I1212 15:14:16.518501    3771 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-gbw2q
	I1212 15:14:16.518507    3771 round_trippers.go:469] Request Headers:
	I1212 15:14:16.518513    3771 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 15:14:16.518520    3771 round_trippers.go:473]     Accept: application/json, */*
	I1212 15:14:16.519796    3771 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 15:14:16.519806    3771 round_trippers.go:577] Response Headers:
	I1212 15:14:16.519811    3771 round_trippers.go:580]     Audit-Id: 2b955cb0-45ac-4553-b1c3-7ce43f728bdb
	I1212 15:14:16.519816    3771 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 15:14:16.519823    3771 round_trippers.go:580]     Content-Type: application/json
	I1212 15:14:16.519830    3771 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e29f512c-9329-4cd8-9305-82ad26aea980
	I1212 15:14:16.519836    3771 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ddf0869-55a0-47cb-b7ef-d98ab66cac66
	I1212 15:14:16.519846    3771 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:16 GMT
	I1212 15:14:16.520056    3771 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-gbw2q","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"09d20e99-6d1a-46d5-858f-71585ab9e532","resourceVersion":"470","creationTimestamp":"2023-12-12T23:13:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a4c42d0c-8d38-4a37-89f9-122c8abf6177","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:13:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a4c42d0c-8d38-4a37-89f9-122c8abf6177\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6541 chars]
	I1212 15:14:16.520307    3771 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-449000
	I1212 15:14:16.520314    3771 round_trippers.go:469] Request Headers:
	I1212 15:14:16.520323    3771 round_trippers.go:473]     Accept: application/json, */*
	I1212 15:14:16.520329    3771 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 15:14:16.521683    3771 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 15:14:16.521694    3771 round_trippers.go:577] Response Headers:
	I1212 15:14:16.521700    3771 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e29f512c-9329-4cd8-9305-82ad26aea980
	I1212 15:14:16.521706    3771 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ddf0869-55a0-47cb-b7ef-d98ab66cac66
	I1212 15:14:16.521711    3771 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:16 GMT
	I1212 15:14:16.521716    3771 round_trippers.go:580]     Audit-Id: 6eec48cc-b8a4-43fe-b03c-c9ca0b92f1c9
	I1212 15:14:16.521721    3771 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 15:14:16.521725    3771 round_trippers.go:580]     Content-Type: application/json
	I1212 15:14:16.521801    3771 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-449000","uid":"ef12e061-4098-4e16-9f9d-81f611fa0b4f","resourceVersion":"434","creationTimestamp":"2023-12-12T23:13:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-449000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a4cfdd7fe6105c8f2fb237e157ac115c68ce5446","minikube.k8s.io/name":"multinode-449000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T15_13_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:13:01Z","fieldsType":"FieldsV1","fi [truncated 5130 chars]
	I1212 15:14:16.521984    3771 pod_ready.go:97] node "multinode-449000" hosting pod "coredns-5dd5756b68-gbw2q" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-449000" has status "Ready":"False"
	I1212 15:14:16.521993    3771 pod_ready.go:81] duration metric: took 3.523973ms waiting for pod "coredns-5dd5756b68-gbw2q" in "kube-system" namespace to be "Ready" ...
	E1212 15:14:16.521999    3771 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-449000" hosting pod "coredns-5dd5756b68-gbw2q" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-449000" has status "Ready":"False"
	I1212 15:14:16.522006    3771 pod_ready.go:78] waiting up to 4m0s for pod "etcd-multinode-449000" in "kube-system" namespace to be "Ready" ...
	I1212 15:14:16.522035    3771 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-449000
	I1212 15:14:16.522040    3771 round_trippers.go:469] Request Headers:
	I1212 15:14:16.522046    3771 round_trippers.go:473]     Accept: application/json, */*
	I1212 15:14:16.522052    3771 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 15:14:16.523095    3771 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 15:14:16.523103    3771 round_trippers.go:577] Response Headers:
	I1212 15:14:16.523108    3771 round_trippers.go:580]     Audit-Id: 1fdc9f31-4d1d-4cc2-a1e1-7a7598a887d0
	I1212 15:14:16.523113    3771 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 15:14:16.523117    3771 round_trippers.go:580]     Content-Type: application/json
	I1212 15:14:16.523122    3771 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e29f512c-9329-4cd8-9305-82ad26aea980
	I1212 15:14:16.523127    3771 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ddf0869-55a0-47cb-b7ef-d98ab66cac66
	I1212 15:14:16.523132    3771 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:16 GMT
	I1212 15:14:16.523272    3771 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-449000","namespace":"kube-system","uid":"193c5da5-9957-4b0c-ac1f-0883f287dc0d","resourceVersion":"477","creationTimestamp":"2023-12-12T23:13:04Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.169.0.13:2379","kubernetes.io/config.hash":"1a832df13b4e9773d7a6b67fbfc8fb00","kubernetes.io/config.mirror":"1a832df13b4e9773d7a6b67fbfc8fb00","kubernetes.io/config.seen":"2023-12-12T23:13:04.726760505Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-449000","uid":"ef12e061-4098-4e16-9f9d-81f611fa0b4f","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:13:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6285 chars]
	I1212 15:14:16.523491    3771 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-449000
	I1212 15:14:16.523501    3771 round_trippers.go:469] Request Headers:
	I1212 15:14:16.523507    3771 round_trippers.go:473]     Accept: application/json, */*
	I1212 15:14:16.523512    3771 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 15:14:16.524719    3771 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 15:14:16.524727    3771 round_trippers.go:577] Response Headers:
	I1212 15:14:16.524738    3771 round_trippers.go:580]     Audit-Id: d5b10b59-f2fc-473c-8a00-fd1067f6ce84
	I1212 15:14:16.524742    3771 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 15:14:16.524750    3771 round_trippers.go:580]     Content-Type: application/json
	I1212 15:14:16.524754    3771 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e29f512c-9329-4cd8-9305-82ad26aea980
	I1212 15:14:16.524758    3771 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ddf0869-55a0-47cb-b7ef-d98ab66cac66
	I1212 15:14:16.524763    3771 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:16 GMT
	I1212 15:14:16.524884    3771 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-449000","uid":"ef12e061-4098-4e16-9f9d-81f611fa0b4f","resourceVersion":"434","creationTimestamp":"2023-12-12T23:13:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-449000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a4cfdd7fe6105c8f2fb237e157ac115c68ce5446","minikube.k8s.io/name":"multinode-449000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T15_13_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:13:01Z","fieldsType":"FieldsV1","fi [truncated 5130 chars]
	I1212 15:14:16.525066    3771 pod_ready.go:97] node "multinode-449000" hosting pod "etcd-multinode-449000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-449000" has status "Ready":"False"
	I1212 15:14:16.525075    3771 pod_ready.go:81] duration metric: took 3.064124ms waiting for pod "etcd-multinode-449000" in "kube-system" namespace to be "Ready" ...
	E1212 15:14:16.525081    3771 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-449000" hosting pod "etcd-multinode-449000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-449000" has status "Ready":"False"
	I1212 15:14:16.525088    3771 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-multinode-449000" in "kube-system" namespace to be "Ready" ...
	I1212 15:14:16.525121    3771 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-449000
	I1212 15:14:16.525125    3771 round_trippers.go:469] Request Headers:
	I1212 15:14:16.525131    3771 round_trippers.go:473]     Accept: application/json, */*
	I1212 15:14:16.525136    3771 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 15:14:16.526274    3771 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 15:14:16.526281    3771 round_trippers.go:577] Response Headers:
	I1212 15:14:16.526286    3771 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ddf0869-55a0-47cb-b7ef-d98ab66cac66
	I1212 15:14:16.526290    3771 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:16 GMT
	I1212 15:14:16.526295    3771 round_trippers.go:580]     Audit-Id: 9156df31-2e49-4e9a-86ba-9a6bc4cf8cc4
	I1212 15:14:16.526300    3771 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 15:14:16.526305    3771 round_trippers.go:580]     Content-Type: application/json
	I1212 15:14:16.526309    3771 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e29f512c-9329-4cd8-9305-82ad26aea980
	I1212 15:14:16.526519    3771 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-449000","namespace":"kube-system","uid":"d0340375-33dc-42b7-9b1d-6e66ff24d07b","resourceVersion":"474","creationTimestamp":"2023-12-12T23:13:02Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.169.0.13:8443","kubernetes.io/config.hash":"713a71f0e8f1e4f4a127fa5f9adf437f","kubernetes.io/config.mirror":"713a71f0e8f1e4f4a127fa5f9adf437f","kubernetes.io/config.seen":"2023-12-12T23:12:58.089999663Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-449000","uid":"ef12e061-4098-4e16-9f9d-81f611fa0b4f","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:13:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 7841 chars]
	I1212 15:14:16.526751    3771 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-449000
	I1212 15:14:16.526757    3771 round_trippers.go:469] Request Headers:
	I1212 15:14:16.526763    3771 round_trippers.go:473]     Accept: application/json, */*
	I1212 15:14:16.526768    3771 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 15:14:16.527953    3771 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 15:14:16.527972    3771 round_trippers.go:577] Response Headers:
	I1212 15:14:16.527986    3771 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ddf0869-55a0-47cb-b7ef-d98ab66cac66
	I1212 15:14:16.527995    3771 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:16 GMT
	I1212 15:14:16.528001    3771 round_trippers.go:580]     Audit-Id: a0e40730-20df-4bf5-bbe9-745590965bf3
	I1212 15:14:16.528006    3771 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 15:14:16.528010    3771 round_trippers.go:580]     Content-Type: application/json
	I1212 15:14:16.528015    3771 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e29f512c-9329-4cd8-9305-82ad26aea980
	I1212 15:14:16.528107    3771 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-449000","uid":"ef12e061-4098-4e16-9f9d-81f611fa0b4f","resourceVersion":"434","creationTimestamp":"2023-12-12T23:13:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-449000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a4cfdd7fe6105c8f2fb237e157ac115c68ce5446","minikube.k8s.io/name":"multinode-449000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T15_13_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:13:01Z","fieldsType":"FieldsV1","fi [truncated 5130 chars]
	I1212 15:14:16.528301    3771 pod_ready.go:97] node "multinode-449000" hosting pod "kube-apiserver-multinode-449000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-449000" has status "Ready":"False"
	I1212 15:14:16.528310    3771 pod_ready.go:81] duration metric: took 3.218066ms waiting for pod "kube-apiserver-multinode-449000" in "kube-system" namespace to be "Ready" ...
	E1212 15:14:16.528316    3771 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-449000" hosting pod "kube-apiserver-multinode-449000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-449000" has status "Ready":"False"
	I1212 15:14:16.528325    3771 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-multinode-449000" in "kube-system" namespace to be "Ready" ...
	I1212 15:14:16.568215    3771 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-449000
	I1212 15:14:16.568235    3771 round_trippers.go:469] Request Headers:
	I1212 15:14:16.568276    3771 round_trippers.go:473]     Accept: application/json, */*
	I1212 15:14:16.568287    3771 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 15:14:16.570495    3771 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 15:14:16.570507    3771 round_trippers.go:577] Response Headers:
	I1212 15:14:16.570515    3771 round_trippers.go:580]     Audit-Id: 59ec53eb-ab34-4589-8e9c-18528266225c
	I1212 15:14:16.570523    3771 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 15:14:16.570531    3771 round_trippers.go:580]     Content-Type: application/json
	I1212 15:14:16.570537    3771 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e29f512c-9329-4cd8-9305-82ad26aea980
	I1212 15:14:16.570543    3771 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ddf0869-55a0-47cb-b7ef-d98ab66cac66
	I1212 15:14:16.570552    3771 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:16 GMT
	I1212 15:14:16.570762    3771 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-449000","namespace":"kube-system","uid":"3cdec7d9-450b-47be-b93b-a5f3985415fa","resourceVersion":"475","creationTimestamp":"2023-12-12T23:13:04Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"ff9ba650fd2dd54b1306fbad348194b4","kubernetes.io/config.mirror":"ff9ba650fd2dd54b1306fbad348194b4","kubernetes.io/config.seen":"2023-12-12T23:12:58.090000240Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-449000","uid":"ef12e061-4098-4e16-9f9d-81f611fa0b4f","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:13:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7432 chars]
	I1212 15:14:16.768018    3771 request.go:629] Waited for 196.836714ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/nodes/multinode-449000
	I1212 15:14:16.768102    3771 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-449000
	I1212 15:14:16.768145    3771 round_trippers.go:469] Request Headers:
	I1212 15:14:16.768159    3771 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 15:14:16.768179    3771 round_trippers.go:473]     Accept: application/json, */*
	I1212 15:14:16.771253    3771 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 15:14:16.771267    3771 round_trippers.go:577] Response Headers:
	I1212 15:14:16.771275    3771 round_trippers.go:580]     Audit-Id: b4fcc658-b638-461b-b6a6-32eeeb37e205
	I1212 15:14:16.771300    3771 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 15:14:16.771311    3771 round_trippers.go:580]     Content-Type: application/json
	I1212 15:14:16.771335    3771 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e29f512c-9329-4cd8-9305-82ad26aea980
	I1212 15:14:16.771345    3771 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ddf0869-55a0-47cb-b7ef-d98ab66cac66
	I1212 15:14:16.771352    3771 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:16 GMT
	I1212 15:14:16.771484    3771 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-449000","uid":"ef12e061-4098-4e16-9f9d-81f611fa0b4f","resourceVersion":"434","creationTimestamp":"2023-12-12T23:13:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-449000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a4cfdd7fe6105c8f2fb237e157ac115c68ce5446","minikube.k8s.io/name":"multinode-449000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T15_13_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:13:01Z","fieldsType":"FieldsV1","fi [truncated 5130 chars]
	I1212 15:14:16.771740    3771 pod_ready.go:97] node "multinode-449000" hosting pod "kube-controller-manager-multinode-449000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-449000" has status "Ready":"False"
	I1212 15:14:16.771753    3771 pod_ready.go:81] duration metric: took 243.423504ms waiting for pod "kube-controller-manager-multinode-449000" in "kube-system" namespace to be "Ready" ...
	E1212 15:14:16.771761    3771 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-449000" hosting pod "kube-controller-manager-multinode-449000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-449000" has status "Ready":"False"
	I1212 15:14:16.771770    3771 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-hxq22" in "kube-system" namespace to be "Ready" ...
	I1212 15:14:16.969310    3771 request.go:629] Waited for 197.489796ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hxq22
	I1212 15:14:16.969409    3771 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hxq22
	I1212 15:14:16.969420    3771 round_trippers.go:469] Request Headers:
	I1212 15:14:16.969432    3771 round_trippers.go:473]     Accept: application/json, */*
	I1212 15:14:16.969442    3771 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 15:14:16.972212    3771 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 15:14:16.972223    3771 round_trippers.go:577] Response Headers:
	I1212 15:14:16.972229    3771 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 15:14:16.972238    3771 round_trippers.go:580]     Content-Type: application/json
	I1212 15:14:16.972247    3771 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e29f512c-9329-4cd8-9305-82ad26aea980
	I1212 15:14:16.972255    3771 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ddf0869-55a0-47cb-b7ef-d98ab66cac66
	I1212 15:14:16.972262    3771 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:17 GMT
	I1212 15:14:16.972268    3771 round_trippers.go:580]     Audit-Id: d49b3cb7-8196-44af-8b58-f4387871ddbb
	I1212 15:14:16.972563    3771 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-hxq22","generateName":"kube-proxy-","namespace":"kube-system","uid":"d330b0b4-7d3f-4386-a72d-cb235945c494","resourceVersion":"473","creationTimestamp":"2023-12-12T23:13:17Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"baac289e-d94d-427e-ad81-e4b30512f118","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:13:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"baac289e-d94d-427e-ad81-e4b30512f118\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5924 chars]
	I1212 15:14:17.168265    3771 request.go:629] Waited for 195.31886ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/nodes/multinode-449000
	I1212 15:14:17.168378    3771 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-449000
	I1212 15:14:17.168387    3771 round_trippers.go:469] Request Headers:
	I1212 15:14:17.168399    3771 round_trippers.go:473]     Accept: application/json, */*
	I1212 15:14:17.168410    3771 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 15:14:17.171054    3771 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 15:14:17.171074    3771 round_trippers.go:577] Response Headers:
	I1212 15:14:17.171092    3771 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 15:14:17.171102    3771 round_trippers.go:580]     Content-Type: application/json
	I1212 15:14:17.171111    3771 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e29f512c-9329-4cd8-9305-82ad26aea980
	I1212 15:14:17.171121    3771 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ddf0869-55a0-47cb-b7ef-d98ab66cac66
	I1212 15:14:17.171132    3771 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:17 GMT
	I1212 15:14:17.171145    3771 round_trippers.go:580]     Audit-Id: 002eb82f-f62d-4997-b439-88b8bfeb208b
	I1212 15:14:17.171358    3771 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-449000","uid":"ef12e061-4098-4e16-9f9d-81f611fa0b4f","resourceVersion":"434","creationTimestamp":"2023-12-12T23:13:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-449000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a4cfdd7fe6105c8f2fb237e157ac115c68ce5446","minikube.k8s.io/name":"multinode-449000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T15_13_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:13:01Z","fieldsType":"FieldsV1","fi [truncated 5130 chars]
	I1212 15:14:17.171626    3771 pod_ready.go:97] node "multinode-449000" hosting pod "kube-proxy-hxq22" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-449000" has status "Ready":"False"
	I1212 15:14:17.171643    3771 pod_ready.go:81] duration metric: took 399.866049ms waiting for pod "kube-proxy-hxq22" in "kube-system" namespace to be "Ready" ...
	E1212 15:14:17.171652    3771 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-449000" hosting pod "kube-proxy-hxq22" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-449000" has status "Ready":"False"
	I1212 15:14:17.171663    3771 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-multinode-449000" in "kube-system" namespace to be "Ready" ...
	I1212 15:14:17.368369    3771 request.go:629] Waited for 196.665198ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-449000
	I1212 15:14:17.368410    3771 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-449000
	I1212 15:14:17.368453    3771 round_trippers.go:469] Request Headers:
	I1212 15:14:17.368461    3771 round_trippers.go:473]     Accept: application/json, */*
	I1212 15:14:17.368473    3771 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 15:14:17.370345    3771 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 15:14:17.370355    3771 round_trippers.go:577] Response Headers:
	I1212 15:14:17.370360    3771 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e29f512c-9329-4cd8-9305-82ad26aea980
	I1212 15:14:17.370366    3771 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ddf0869-55a0-47cb-b7ef-d98ab66cac66
	I1212 15:14:17.370373    3771 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:17 GMT
	I1212 15:14:17.370381    3771 round_trippers.go:580]     Audit-Id: 38f0eb5a-b257-4b52-8017-60accda015cb
	I1212 15:14:17.370389    3771 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 15:14:17.370394    3771 round_trippers.go:580]     Content-Type: application/json
	I1212 15:14:17.370486    3771 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-449000","namespace":"kube-system","uid":"6eda8382-3903-4ab4-96fb-afc4948c144b","resourceVersion":"476","creationTimestamp":"2023-12-12T23:13:04Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"d002db3a6af46c2d870b0132a00cfc72","kubernetes.io/config.mirror":"d002db3a6af46c2d870b0132a00cfc72","kubernetes.io/config.seen":"2023-12-12T23:13:04.726764045Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-449000","uid":"ef12e061-4098-4e16-9f9d-81f611fa0b4f","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:13:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 5144 chars]
	I1212 15:14:17.567826    3771 request.go:629] Waited for 197.099493ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/nodes/multinode-449000
	I1212 15:14:17.567933    3771 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-449000
	I1212 15:14:17.567941    3771 round_trippers.go:469] Request Headers:
	I1212 15:14:17.567949    3771 round_trippers.go:473]     Accept: application/json, */*
	I1212 15:14:17.567957    3771 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 15:14:17.569858    3771 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 15:14:17.569868    3771 round_trippers.go:577] Response Headers:
	I1212 15:14:17.569874    3771 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e29f512c-9329-4cd8-9305-82ad26aea980
	I1212 15:14:17.569879    3771 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ddf0869-55a0-47cb-b7ef-d98ab66cac66
	I1212 15:14:17.569884    3771 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:17 GMT
	I1212 15:14:17.569889    3771 round_trippers.go:580]     Audit-Id: 3acead1d-be8e-4f43-949c-311a9aaa5840
	I1212 15:14:17.569894    3771 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 15:14:17.569898    3771 round_trippers.go:580]     Content-Type: application/json
	I1212 15:14:17.569987    3771 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-449000","uid":"ef12e061-4098-4e16-9f9d-81f611fa0b4f","resourceVersion":"434","creationTimestamp":"2023-12-12T23:13:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-449000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a4cfdd7fe6105c8f2fb237e157ac115c68ce5446","minikube.k8s.io/name":"multinode-449000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T15_13_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:13:01Z","fieldsType":"FieldsV1","fi [truncated 5130 chars]
	I1212 15:14:17.570182    3771 pod_ready.go:97] node "multinode-449000" hosting pod "kube-scheduler-multinode-449000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-449000" has status "Ready":"False"
	I1212 15:14:17.570191    3771 pod_ready.go:81] duration metric: took 398.524684ms waiting for pod "kube-scheduler-multinode-449000" in "kube-system" namespace to be "Ready" ...
	E1212 15:14:17.570198    3771 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-449000" hosting pod "kube-scheduler-multinode-449000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-449000" has status "Ready":"False"
	I1212 15:14:17.570203    3771 pod_ready.go:38] duration metric: took 1.055593046s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
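The waiting loop above repeatedly polls each system pod and its node until the Ready condition holds (here it exits early because the node itself is not Ready yet). A compressed sketch of one such pass, with an assumed kubeconfig path and an illustrative label selector; the log actually iterates over several component labels:

// Sketch: check the Ready condition of system-critical pods, as the
// pod_ready loop above does. Kubeconfig path and selector are assumptions.
package main

import (
	"context"
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/Users/jenkins/.kube/config") // assumed path
	if err != nil {
		log.Fatal(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	pods, err := clientset.CoreV1().Pods("kube-system").List(context.TODO(),
		metav1.ListOptions{LabelSelector: "tier=control-plane"}) // illustrative selector
	if err != nil {
		log.Fatal(err)
	}
	for _, p := range pods.Items {
		ready := false
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				ready = true
			}
		}
		fmt.Printf("%s Ready=%v\n", p.Name, ready)
	}
}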
	I1212 15:14:17.570218    3771 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1212 15:14:17.577340    3771 command_runner.go:130] > -16
	I1212 15:14:17.577450    3771 ops.go:34] apiserver oom_adj: -16
	I1212 15:14:17.577467    3771 kubeadm.go:640] restartCluster took 18.276944278s
	I1212 15:14:17.577473    3771 kubeadm.go:406] StartCluster complete in 18.295831765s
	I1212 15:14:17.577484    3771 settings.go:142] acquiring lock: {Name:mka464ae20beabe0956367b7c096b2df64ddda96 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 15:14:17.577554    3771 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/17777-1259/kubeconfig
	I1212 15:14:17.578006    3771 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17777-1259/kubeconfig: {Name:mk59d3fcca7c93e43d82a40f16bbb777946cd182 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 15:14:17.578246    3771 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1212 15:14:17.578277    3771 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1212 15:14:17.578314    3771 addons.go:69] Setting storage-provisioner=true in profile "multinode-449000"
	I1212 15:14:17.578329    3771 addons.go:231] Setting addon storage-provisioner=true in "multinode-449000"
	W1212 15:14:17.578363    3771 addons.go:240] addon storage-provisioner should already be in state true
	I1212 15:14:17.578381    3771 config.go:182] Loaded profile config "multinode-449000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1212 15:14:17.578328    3771 addons.go:69] Setting default-storageclass=true in profile "multinode-449000"
	I1212 15:14:17.578400    3771 host.go:66] Checking if "multinode-449000" exists ...
	I1212 15:14:17.578426    3771 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-449000"
	I1212 15:14:17.578627    3771 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1212 15:14:17.578642    3771 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1212 15:14:17.578685    3771 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/17777-1259/kubeconfig
	I1212 15:14:17.578764    3771 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1212 15:14:17.578784    3771 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1212 15:14:17.579488    3771 kapi.go:59] client config for multinode-449000: &rest.Config{Host:"https://192.169.0.13:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/multinode-449000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/multinode-449000/client.key", CAFile:"/Users/jenkins/minikube-integration/17777-1259/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextP
rotos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f5e20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1212 15:14:17.581424    3771 round_trippers.go:463] GET https://192.169.0.13:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1212 15:14:17.581750    3771 round_trippers.go:469] Request Headers:
	I1212 15:14:17.581758    3771 round_trippers.go:473]     Accept: application/json, */*
	I1212 15:14:17.581763    3771 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 15:14:17.583513    3771 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 15:14:17.583524    3771 round_trippers.go:577] Response Headers:
	I1212 15:14:17.583530    3771 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e29f512c-9329-4cd8-9305-82ad26aea980
	I1212 15:14:17.583536    3771 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ddf0869-55a0-47cb-b7ef-d98ab66cac66
	I1212 15:14:17.583542    3771 round_trippers.go:580]     Content-Length: 291
	I1212 15:14:17.583549    3771 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:17 GMT
	I1212 15:14:17.583555    3771 round_trippers.go:580]     Audit-Id: de4a74f9-b68f-4c0e-a740-06bd4b0c5440
	I1212 15:14:17.583559    3771 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 15:14:17.583564    3771 round_trippers.go:580]     Content-Type: application/json
	I1212 15:14:17.583585    3771 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"f736b503-d037-4c88-b91e-8a6459d1e321","resourceVersion":"492","creationTimestamp":"2023-12-12T23:13:04Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I1212 15:14:17.583710    3771 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-449000" context rescaled to 1 replicas
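The scale subresource queried above is also how a client would rescale coredns explicitly. A sketch under the same assumed kubeconfig path; the run above only reads the scale (it is already 1), so the update call below is illustrative:

// Sketch: rescale the coredns deployment through the scale subresource,
// the same endpoint queried above. Kubeconfig path is an assumed value.
package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/Users/jenkins/.kube/config") // assumed path
	if err != nil {
		log.Fatal(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	deployments := clientset.AppsV1().Deployments("kube-system")
	scale, err := deployments.GetScale(context.TODO(), "coredns", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	scale.Spec.Replicas = 1
	if _, err := deployments.UpdateScale(context.TODO(), "coredns", scale, metav1.UpdateOptions{}); err != nil {
		log.Fatal(err)
	}
	fmt.Println("coredns rescaled to 1 replica")
}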
	I1212 15:14:17.583731    3771 start.go:223] Will wait 6m0s for node &{Name: IP:192.169.0.13 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1212 15:14:17.628317    3771 out.go:177] * Verifying Kubernetes components...
	I1212 15:14:17.587541    3771 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51387
	I1212 15:14:17.587841    3771 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51388
	I1212 15:14:17.639169    3771 command_runner.go:130] > apiVersion: v1
	I1212 15:14:17.649114    3771 command_runner.go:130] > data:
	I1212 15:14:17.649119    3771 command_runner.go:130] >   Corefile: |
	I1212 15:14:17.649123    3771 command_runner.go:130] >     .:53 {
	I1212 15:14:17.649126    3771 command_runner.go:130] >         log
	I1212 15:14:17.649131    3771 command_runner.go:130] >         errors
	I1212 15:14:17.649135    3771 command_runner.go:130] >         health {
	I1212 15:14:17.649139    3771 command_runner.go:130] >            lameduck 5s
	I1212 15:14:17.649142    3771 command_runner.go:130] >         }
	I1212 15:14:17.649147    3771 command_runner.go:130] >         ready
	I1212 15:14:17.649152    3771 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I1212 15:14:17.649156    3771 command_runner.go:130] >            pods insecure
	I1212 15:14:17.649163    3771 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I1212 15:14:17.649168    3771 command_runner.go:130] >            ttl 30
	I1212 15:14:17.649167    3771 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 15:14:17.649172    3771 command_runner.go:130] >         }
	I1212 15:14:17.649186    3771 command_runner.go:130] >         prometheus :9153
	I1212 15:14:17.649193    3771 command_runner.go:130] >         hosts {
	I1212 15:14:17.649220    3771 command_runner.go:130] >            192.169.0.1 host.minikube.internal
	I1212 15:14:17.649228    3771 command_runner.go:130] >            fallthrough
	I1212 15:14:17.649232    3771 command_runner.go:130] >         }
	I1212 15:14:17.649241    3771 command_runner.go:130] >         forward . /etc/resolv.conf {
	I1212 15:14:17.649256    3771 command_runner.go:130] >            max_concurrent 1000
	I1212 15:14:17.649263    3771 command_runner.go:130] >         }
	I1212 15:14:17.649267    3771 command_runner.go:130] >         cache 30
	I1212 15:14:17.649271    3771 command_runner.go:130] >         loop
	I1212 15:14:17.649274    3771 command_runner.go:130] >         reload
	I1212 15:14:17.649277    3771 command_runner.go:130] >         loadbalance
	I1212 15:14:17.649280    3771 command_runner.go:130] >     }
	I1212 15:14:17.649284    3771 command_runner.go:130] > kind: ConfigMap
	I1212 15:14:17.649287    3771 command_runner.go:130] > metadata:
	I1212 15:14:17.649291    3771 command_runner.go:130] >   creationTimestamp: "2023-12-12T23:13:04Z"
	I1212 15:14:17.649295    3771 command_runner.go:130] >   name: coredns
	I1212 15:14:17.649299    3771 command_runner.go:130] >   namespace: kube-system
	I1212 15:14:17.649304    3771 command_runner.go:130] >   resourceVersion: "368"
	I1212 15:14:17.649308    3771 command_runner.go:130] >   uid: f9bb7a70-2db5-4a8f-90d5-b8bc77095680
	I1212 15:14:17.649391    3771 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
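The Corefile dumped above is inspected for the host.minikube.internal record before any patch is attempted; since the record is present, the step is skipped. A sketch of that check via the coredns ConfigMap, with an assumed kubeconfig path:

// Sketch: fetch the coredns ConfigMap and look for the host.minikube.internal
// record, mirroring the "already contains" check above. Path is assumed.
package main

import (
	"context"
	"fmt"
	"log"
	"strings"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/Users/jenkins/.kube/config") // assumed path
	if err != nil {
		log.Fatal(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	cm, err := clientset.CoreV1().ConfigMaps("kube-system").Get(context.TODO(), "coredns", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	if strings.Contains(cm.Data["Corefile"], "host.minikube.internal") {
		fmt.Println("CoreDNS already contains the host.minikube.internal record")
	} else {
		fmt.Println("host record missing; the Corefile would be patched here")
	}
}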
	I1212 15:14:17.649580    3771 main.go:141] libmachine: () Calling .GetVersion
	I1212 15:14:17.649631    3771 main.go:141] libmachine: () Calling .GetVersion
	I1212 15:14:17.649969    3771 main.go:141] libmachine: Using API Version  1
	I1212 15:14:17.649980    3771 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 15:14:17.650041    3771 main.go:141] libmachine: Using API Version  1
	I1212 15:14:17.650051    3771 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 15:14:17.650222    3771 main.go:141] libmachine: () Calling .GetMachineName
	I1212 15:14:17.650264    3771 main.go:141] libmachine: () Calling .GetMachineName
	I1212 15:14:17.650381    3771 main.go:141] libmachine: (multinode-449000) Calling .GetState
	I1212 15:14:17.650482    3771 main.go:141] libmachine: (multinode-449000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1212 15:14:17.650543    3771 main.go:141] libmachine: (multinode-449000) DBG | hyperkit pid from json: 3784
	I1212 15:14:17.650642    3771 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1212 15:14:17.650660    3771 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1212 15:14:17.652813    3771 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/17777-1259/kubeconfig
	I1212 15:14:17.653053    3771 kapi.go:59] client config for multinode-449000: &rest.Config{Host:"https://192.169.0.13:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/multinode-449000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/multinode-449000/client.key", CAFile:"/Users/jenkins/minikube-integration/17777-1259/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextP
rotos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f5e20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1212 15:14:17.653268    3771 addons.go:231] Setting addon default-storageclass=true in "multinode-449000"
	W1212 15:14:17.653281    3771 addons.go:240] addon default-storageclass should already be in state true
	I1212 15:14:17.653294    3771 host.go:66] Checking if "multinode-449000" exists ...
	I1212 15:14:17.653549    3771 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1212 15:14:17.653570    3771 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1212 15:14:17.659200    3771 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51391
	I1212 15:14:17.659558    3771 main.go:141] libmachine: () Calling .GetVersion
	I1212 15:14:17.659955    3771 main.go:141] libmachine: Using API Version  1
	I1212 15:14:17.659974    3771 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 15:14:17.660171    3771 main.go:141] libmachine: () Calling .GetMachineName
	I1212 15:14:17.660282    3771 main.go:141] libmachine: (multinode-449000) Calling .GetState
	I1212 15:14:17.660385    3771 main.go:141] libmachine: (multinode-449000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1212 15:14:17.660452    3771 main.go:141] libmachine: (multinode-449000) DBG | hyperkit pid from json: 3784
	I1212 15:14:17.661444    3771 main.go:141] libmachine: (multinode-449000) Calling .DriverName
	I1212 15:14:17.682175    3771 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 15:14:17.661681    3771 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51393
	I1212 15:14:17.662009    3771 node_ready.go:35] waiting up to 6m0s for node "multinode-449000" to be "Ready" ...
	I1212 15:14:17.703417    3771 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 15:14:17.703434    3771 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1212 15:14:17.703453    3771 main.go:141] libmachine: (multinode-449000) Calling .GetSSHHostname
	I1212 15:14:17.703657    3771 main.go:141] libmachine: (multinode-449000) Calling .GetSSHPort
	I1212 15:14:17.703835    3771 main.go:141] libmachine: (multinode-449000) Calling .GetSSHKeyPath
	I1212 15:14:17.703922    3771 main.go:141] libmachine: () Calling .GetVersion
	I1212 15:14:17.704022    3771 main.go:141] libmachine: (multinode-449000) Calling .GetSSHUsername
	I1212 15:14:17.704192    3771 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17777-1259/.minikube/machines/multinode-449000/id_rsa Username:docker}
	I1212 15:14:17.704511    3771 main.go:141] libmachine: Using API Version  1
	I1212 15:14:17.704531    3771 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 15:14:17.704927    3771 main.go:141] libmachine: () Calling .GetMachineName
	I1212 15:14:17.705469    3771 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1212 15:14:17.705494    3771 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1212 15:14:17.713944    3771 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51396
	I1212 15:14:17.714266    3771 main.go:141] libmachine: () Calling .GetVersion
	I1212 15:14:17.714595    3771 main.go:141] libmachine: Using API Version  1
	I1212 15:14:17.714606    3771 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 15:14:17.714816    3771 main.go:141] libmachine: () Calling .GetMachineName
	I1212 15:14:17.714919    3771 main.go:141] libmachine: (multinode-449000) Calling .GetState
	I1212 15:14:17.715004    3771 main.go:141] libmachine: (multinode-449000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1212 15:14:17.715074    3771 main.go:141] libmachine: (multinode-449000) DBG | hyperkit pid from json: 3784
	I1212 15:14:17.716036    3771 main.go:141] libmachine: (multinode-449000) Calling .DriverName
	I1212 15:14:17.716198    3771 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1212 15:14:17.716207    3771 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1212 15:14:17.716217    3771 main.go:141] libmachine: (multinode-449000) Calling .GetSSHHostname
	I1212 15:14:17.716307    3771 main.go:141] libmachine: (multinode-449000) Calling .GetSSHPort
	I1212 15:14:17.716385    3771 main.go:141] libmachine: (multinode-449000) Calling .GetSSHKeyPath
	I1212 15:14:17.716469    3771 main.go:141] libmachine: (multinode-449000) Calling .GetSSHUsername
	I1212 15:14:17.716554    3771 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17777-1259/.minikube/machines/multinode-449000/id_rsa Username:docker}
	I1212 15:14:17.764446    3771 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 15:14:17.767675    3771 request.go:629] Waited for 64.383295ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/nodes/multinode-449000
	I1212 15:14:17.767701    3771 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-449000
	I1212 15:14:17.767708    3771 round_trippers.go:469] Request Headers:
	I1212 15:14:17.767716    3771 round_trippers.go:473]     Accept: application/json, */*
	I1212 15:14:17.767723    3771 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 15:14:17.769169    3771 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 15:14:17.769180    3771 round_trippers.go:577] Response Headers:
	I1212 15:14:17.769186    3771 round_trippers.go:580]     Content-Type: application/json
	I1212 15:14:17.769200    3771 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e29f512c-9329-4cd8-9305-82ad26aea980
	I1212 15:14:17.769206    3771 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ddf0869-55a0-47cb-b7ef-d98ab66cac66
	I1212 15:14:17.769211    3771 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:17 GMT
	I1212 15:14:17.769215    3771 round_trippers.go:580]     Audit-Id: 4f1fc1e7-20cb-4c57-84ee-7e3db9f19c8e
	I1212 15:14:17.769220    3771 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 15:14:17.769354    3771 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-449000","uid":"ef12e061-4098-4e16-9f9d-81f611fa0b4f","resourceVersion":"434","creationTimestamp":"2023-12-12T23:13:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-449000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a4cfdd7fe6105c8f2fb237e157ac115c68ce5446","minikube.k8s.io/name":"multinode-449000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T15_13_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:13:01Z","fieldsType":"FieldsV1","fi [truncated 5130 chars]
	I1212 15:14:17.775968    3771 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1212 15:14:17.968441    3771 request.go:629] Waited for 198.838924ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/nodes/multinode-449000
	I1212 15:14:17.968482    3771 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-449000
	I1212 15:14:17.968486    3771 round_trippers.go:469] Request Headers:
	I1212 15:14:17.968492    3771 round_trippers.go:473]     Accept: application/json, */*
	I1212 15:14:17.968537    3771 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 15:14:17.970430    3771 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 15:14:17.970443    3771 round_trippers.go:577] Response Headers:
	I1212 15:14:17.970450    3771 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 15:14:17.970456    3771 round_trippers.go:580]     Content-Type: application/json
	I1212 15:14:17.970461    3771 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e29f512c-9329-4cd8-9305-82ad26aea980
	I1212 15:14:17.970466    3771 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ddf0869-55a0-47cb-b7ef-d98ab66cac66
	I1212 15:14:17.970471    3771 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:18 GMT
	I1212 15:14:17.970476    3771 round_trippers.go:580]     Audit-Id: 55cd5cfe-be5b-474c-bb30-4c803e207717
	I1212 15:14:17.970550    3771 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-449000","uid":"ef12e061-4098-4e16-9f9d-81f611fa0b4f","resourceVersion":"434","creationTimestamp":"2023-12-12T23:13:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-449000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a4cfdd7fe6105c8f2fb237e157ac115c68ce5446","minikube.k8s.io/name":"multinode-449000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T15_13_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:13:01Z","fieldsType":"FieldsV1","fi [truncated 5130 chars]
	I1212 15:14:18.152926    3771 command_runner.go:130] > serviceaccount/storage-provisioner unchanged
	I1212 15:14:18.152941    3771 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner unchanged
	I1212 15:14:18.152947    3771 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner unchanged
	I1212 15:14:18.152953    3771 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner unchanged
	I1212 15:14:18.152957    3771 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath unchanged
	I1212 15:14:18.152965    3771 command_runner.go:130] > pod/storage-provisioner configured
	I1212 15:14:18.152988    3771 command_runner.go:130] > storageclass.storage.k8s.io/standard unchanged
	I1212 15:14:18.153011    3771 main.go:141] libmachine: Making call to close driver server
	I1212 15:14:18.153018    3771 main.go:141] libmachine: Making call to close driver server
	I1212 15:14:18.153024    3771 main.go:141] libmachine: (multinode-449000) Calling .Close
	I1212 15:14:18.153025    3771 main.go:141] libmachine: (multinode-449000) Calling .Close
	I1212 15:14:18.153184    3771 main.go:141] libmachine: Successfully made call to close driver server
	I1212 15:14:18.153192    3771 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 15:14:18.153198    3771 main.go:141] libmachine: Making call to close driver server
	I1212 15:14:18.153202    3771 main.go:141] libmachine: (multinode-449000) Calling .Close
	I1212 15:14:18.153202    3771 main.go:141] libmachine: Successfully made call to close driver server
	I1212 15:14:18.153221    3771 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 15:14:18.153221    3771 main.go:141] libmachine: (multinode-449000) DBG | Closing plugin on server side
	I1212 15:14:18.153238    3771 main.go:141] libmachine: Making call to close driver server
	I1212 15:14:18.153248    3771 main.go:141] libmachine: (multinode-449000) Calling .Close
	I1212 15:14:18.153244    3771 main.go:141] libmachine: (multinode-449000) DBG | Closing plugin on server side
	I1212 15:14:18.153327    3771 main.go:141] libmachine: (multinode-449000) DBG | Closing plugin on server side
	I1212 15:14:18.153392    3771 main.go:141] libmachine: Successfully made call to close driver server
	I1212 15:14:18.153391    3771 main.go:141] libmachine: Successfully made call to close driver server
	I1212 15:14:18.153402    3771 main.go:141] libmachine: (multinode-449000) DBG | Closing plugin on server side
	I1212 15:14:18.153403    3771 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 15:14:18.153407    3771 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 15:14:18.153515    3771 round_trippers.go:463] GET https://192.169.0.13:8443/apis/storage.k8s.io/v1/storageclasses
	I1212 15:14:18.153521    3771 round_trippers.go:469] Request Headers:
	I1212 15:14:18.153532    3771 round_trippers.go:473]     Accept: application/json, */*
	I1212 15:14:18.153539    3771 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 15:14:18.155253    3771 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 15:14:18.155262    3771 round_trippers.go:577] Response Headers:
	I1212 15:14:18.155269    3771 round_trippers.go:580]     Content-Type: application/json
	I1212 15:14:18.155274    3771 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e29f512c-9329-4cd8-9305-82ad26aea980
	I1212 15:14:18.155279    3771 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ddf0869-55a0-47cb-b7ef-d98ab66cac66
	I1212 15:14:18.155286    3771 round_trippers.go:580]     Content-Length: 1273
	I1212 15:14:18.155290    3771 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:18 GMT
	I1212 15:14:18.155295    3771 round_trippers.go:580]     Audit-Id: 3ab8b7ef-487b-4e8f-9e1e-7162999f7e9a
	I1212 15:14:18.155300    3771 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 15:14:18.155351    3771 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"499"},"items":[{"metadata":{"name":"standard","uid":"20fb0e5b-d511-4ab4-8113-6d7f1494ee7b","resourceVersion":"369","creationTimestamp":"2023-12-12T23:13:18Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-12-12T23:13:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kuberne
tes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I1212 15:14:18.155702    3771 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"20fb0e5b-d511-4ab4-8113-6d7f1494ee7b","resourceVersion":"369","creationTimestamp":"2023-12-12T23:13:18Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-12-12T23:13:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclas
s.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I1212 15:14:18.155734    3771 round_trippers.go:463] PUT https://192.169.0.13:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I1212 15:14:18.155740    3771 round_trippers.go:469] Request Headers:
	I1212 15:14:18.155745    3771 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 15:14:18.155751    3771 round_trippers.go:473]     Content-Type: application/json
	I1212 15:14:18.155757    3771 round_trippers.go:473]     Accept: application/json, */*
	I1212 15:14:18.157748    3771 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 15:14:18.157758    3771 round_trippers.go:577] Response Headers:
	I1212 15:14:18.157764    3771 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 15:14:18.157769    3771 round_trippers.go:580]     Content-Type: application/json
	I1212 15:14:18.157774    3771 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e29f512c-9329-4cd8-9305-82ad26aea980
	I1212 15:14:18.157778    3771 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ddf0869-55a0-47cb-b7ef-d98ab66cac66
	I1212 15:14:18.157783    3771 round_trippers.go:580]     Content-Length: 1220
	I1212 15:14:18.157788    3771 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:18 GMT
	I1212 15:14:18.157793    3771 round_trippers.go:580]     Audit-Id: 0fd53125-b9f5-48ad-9d6a-3b2480f1e886
	I1212 15:14:18.157835    3771 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"20fb0e5b-d511-4ab4-8113-6d7f1494ee7b","resourceVersion":"369","creationTimestamp":"2023-12-12T23:13:18Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-12-12T23:13:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storagecla
ss.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I1212 15:14:18.157907    3771 main.go:141] libmachine: Making call to close driver server
	I1212 15:14:18.157916    3771 main.go:141] libmachine: (multinode-449000) Calling .Close
	I1212 15:14:18.158044    3771 main.go:141] libmachine: (multinode-449000) DBG | Closing plugin on server side
	I1212 15:14:18.158052    3771 main.go:141] libmachine: Successfully made call to close driver server
	I1212 15:14:18.158059    3771 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 15:14:18.179534    3771 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1212 15:14:18.221437    3771 addons.go:502] enable addons completed in 643.156795ms: enabled=[storage-provisioner default-storageclass]
	I1212 15:14:18.471558    3771 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-449000
	I1212 15:14:18.471584    3771 round_trippers.go:469] Request Headers:
	I1212 15:14:18.471598    3771 round_trippers.go:473]     Accept: application/json, */*
	I1212 15:14:18.471608    3771 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 15:14:18.474312    3771 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 15:14:18.474326    3771 round_trippers.go:577] Response Headers:
	I1212 15:14:18.474334    3771 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ddf0869-55a0-47cb-b7ef-d98ab66cac66
	I1212 15:14:18.474340    3771 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:18 GMT
	I1212 15:14:18.474347    3771 round_trippers.go:580]     Audit-Id: d6eb00cf-95d4-41b4-b7f9-f40d625ac07a
	I1212 15:14:18.474372    3771 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 15:14:18.474380    3771 round_trippers.go:580]     Content-Type: application/json
	I1212 15:14:18.474386    3771 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e29f512c-9329-4cd8-9305-82ad26aea980
	I1212 15:14:18.474536    3771 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-449000","uid":"ef12e061-4098-4e16-9f9d-81f611fa0b4f","resourceVersion":"434","creationTimestamp":"2023-12-12T23:13:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-449000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a4cfdd7fe6105c8f2fb237e157ac115c68ce5446","minikube.k8s.io/name":"multinode-449000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T15_13_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:13:01Z","fieldsType":"FieldsV1","fi [truncated 5130 chars]
	I1212 15:14:18.971025    3771 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-449000
	I1212 15:14:18.971048    3771 round_trippers.go:469] Request Headers:
	I1212 15:14:18.971061    3771 round_trippers.go:473]     Accept: application/json, */*
	I1212 15:14:18.971105    3771 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 15:14:18.973715    3771 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 15:14:18.973735    3771 round_trippers.go:577] Response Headers:
	I1212 15:14:18.973743    3771 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e29f512c-9329-4cd8-9305-82ad26aea980
	I1212 15:14:18.973750    3771 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ddf0869-55a0-47cb-b7ef-d98ab66cac66
	I1212 15:14:18.973758    3771 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:19 GMT
	I1212 15:14:18.973764    3771 round_trippers.go:580]     Audit-Id: c35de591-7cb2-4e94-b259-2a6e51d00d3c
	I1212 15:14:18.973770    3771 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 15:14:18.973776    3771 round_trippers.go:580]     Content-Type: application/json
	I1212 15:14:18.973898    3771 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-449000","uid":"ef12e061-4098-4e16-9f9d-81f611fa0b4f","resourceVersion":"434","creationTimestamp":"2023-12-12T23:13:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-449000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a4cfdd7fe6105c8f2fb237e157ac115c68ce5446","minikube.k8s.io/name":"multinode-449000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T15_13_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:13:01Z","fieldsType":"FieldsV1","fi [truncated 5130 chars]
	I1212 15:14:19.472208    3771 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-449000
	I1212 15:14:19.472240    3771 round_trippers.go:469] Request Headers:
	I1212 15:14:19.472333    3771 round_trippers.go:473]     Accept: application/json, */*
	I1212 15:14:19.472345    3771 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 15:14:19.474937    3771 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 15:14:19.474952    3771 round_trippers.go:577] Response Headers:
	I1212 15:14:19.474960    3771 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 15:14:19.474966    3771 round_trippers.go:580]     Content-Type: application/json
	I1212 15:14:19.474976    3771 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e29f512c-9329-4cd8-9305-82ad26aea980
	I1212 15:14:19.475006    3771 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ddf0869-55a0-47cb-b7ef-d98ab66cac66
	I1212 15:14:19.475020    3771 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:19 GMT
	I1212 15:14:19.475031    3771 round_trippers.go:580]     Audit-Id: 91a6a528-2ef4-4c45-93bf-5ec44f311d28
	I1212 15:14:19.475439    3771 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-449000","uid":"ef12e061-4098-4e16-9f9d-81f611fa0b4f","resourceVersion":"434","creationTimestamp":"2023-12-12T23:13:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-449000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a4cfdd7fe6105c8f2fb237e157ac115c68ce5446","minikube.k8s.io/name":"multinode-449000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T15_13_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:13:01Z","fieldsType":"FieldsV1","fi [truncated 5130 chars]
	I1212 15:14:19.972738    3771 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-449000
	I1212 15:14:19.972770    3771 round_trippers.go:469] Request Headers:
	I1212 15:14:19.972782    3771 round_trippers.go:473]     Accept: application/json, */*
	I1212 15:14:19.972792    3771 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 15:14:19.975800    3771 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 15:14:19.975831    3771 round_trippers.go:577] Response Headers:
	I1212 15:14:19.975856    3771 round_trippers.go:580]     Audit-Id: 8eb337a1-8685-485f-ba92-fe97f239aac5
	I1212 15:14:19.975865    3771 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 15:14:19.975871    3771 round_trippers.go:580]     Content-Type: application/json
	I1212 15:14:19.975878    3771 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e29f512c-9329-4cd8-9305-82ad26aea980
	I1212 15:14:19.975884    3771 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ddf0869-55a0-47cb-b7ef-d98ab66cac66
	I1212 15:14:19.975891    3771 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:20 GMT
	I1212 15:14:19.976015    3771 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-449000","uid":"ef12e061-4098-4e16-9f9d-81f611fa0b4f","resourceVersion":"434","creationTimestamp":"2023-12-12T23:13:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-449000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a4cfdd7fe6105c8f2fb237e157ac115c68ce5446","minikube.k8s.io/name":"multinode-449000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T15_13_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:13:01Z","fieldsType":"FieldsV1","fi [truncated 5130 chars]
	I1212 15:14:19.976289    3771 node_ready.go:58] node "multinode-449000" has status "Ready":"False"
	I1212 15:14:20.471825    3771 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-449000
	I1212 15:14:20.471850    3771 round_trippers.go:469] Request Headers:
	I1212 15:14:20.471863    3771 round_trippers.go:473]     Accept: application/json, */*
	I1212 15:14:20.471872    3771 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 15:14:20.475040    3771 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 15:14:20.475055    3771 round_trippers.go:577] Response Headers:
	I1212 15:14:20.475063    3771 round_trippers.go:580]     Audit-Id: c1bf2eda-e021-437e-8f09-07916e419010
	I1212 15:14:20.475084    3771 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 15:14:20.475098    3771 round_trippers.go:580]     Content-Type: application/json
	I1212 15:14:20.475122    3771 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e29f512c-9329-4cd8-9305-82ad26aea980
	I1212 15:14:20.475131    3771 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ddf0869-55a0-47cb-b7ef-d98ab66cac66
	I1212 15:14:20.475137    3771 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:20 GMT
	I1212 15:14:20.475473    3771 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-449000","uid":"ef12e061-4098-4e16-9f9d-81f611fa0b4f","resourceVersion":"434","creationTimestamp":"2023-12-12T23:13:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-449000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a4cfdd7fe6105c8f2fb237e157ac115c68ce5446","minikube.k8s.io/name":"multinode-449000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T15_13_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:13:01Z","fieldsType":"FieldsV1","fi [truncated 5130 chars]
	I1212 15:14:20.972259    3771 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-449000
	I1212 15:14:20.972289    3771 round_trippers.go:469] Request Headers:
	I1212 15:14:20.972373    3771 round_trippers.go:473]     Accept: application/json, */*
	I1212 15:14:20.972386    3771 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 15:14:20.975273    3771 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 15:14:20.975289    3771 round_trippers.go:577] Response Headers:
	I1212 15:14:20.975297    3771 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 15:14:20.975303    3771 round_trippers.go:580]     Content-Type: application/json
	I1212 15:14:20.975309    3771 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e29f512c-9329-4cd8-9305-82ad26aea980
	I1212 15:14:20.975316    3771 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ddf0869-55a0-47cb-b7ef-d98ab66cac66
	I1212 15:14:20.975322    3771 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:21 GMT
	I1212 15:14:20.975328    3771 round_trippers.go:580]     Audit-Id: 24ef5b2d-bc97-483a-bba4-020b337d70fe
	I1212 15:14:20.975427    3771 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-449000","uid":"ef12e061-4098-4e16-9f9d-81f611fa0b4f","resourceVersion":"434","creationTimestamp":"2023-12-12T23:13:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-449000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a4cfdd7fe6105c8f2fb237e157ac115c68ce5446","minikube.k8s.io/name":"multinode-449000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T15_13_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:13:01Z","fieldsType":"FieldsV1","fi [truncated 5130 chars]
	I1212 15:14:21.471996    3771 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-449000
	I1212 15:14:21.472018    3771 round_trippers.go:469] Request Headers:
	I1212 15:14:21.472032    3771 round_trippers.go:473]     Accept: application/json, */*
	I1212 15:14:21.472046    3771 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 15:14:21.474586    3771 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 15:14:21.474601    3771 round_trippers.go:577] Response Headers:
	I1212 15:14:21.474608    3771 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 15:14:21.474616    3771 round_trippers.go:580]     Content-Type: application/json
	I1212 15:14:21.474624    3771 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e29f512c-9329-4cd8-9305-82ad26aea980
	I1212 15:14:21.474630    3771 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ddf0869-55a0-47cb-b7ef-d98ab66cac66
	I1212 15:14:21.474637    3771 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:21 GMT
	I1212 15:14:21.474643    3771 round_trippers.go:580]     Audit-Id: af457a72-5f4c-4456-916b-e2a450046dc4
	I1212 15:14:21.474744    3771 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-449000","uid":"ef12e061-4098-4e16-9f9d-81f611fa0b4f","resourceVersion":"434","creationTimestamp":"2023-12-12T23:13:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-449000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a4cfdd7fe6105c8f2fb237e157ac115c68ce5446","minikube.k8s.io/name":"multinode-449000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T15_13_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:13:01Z","fieldsType":"FieldsV1","fi [truncated 5130 chars]
	I1212 15:14:21.970871    3771 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-449000
	I1212 15:14:21.970888    3771 round_trippers.go:469] Request Headers:
	I1212 15:14:21.970895    3771 round_trippers.go:473]     Accept: application/json, */*
	I1212 15:14:21.970900    3771 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 15:14:21.973520    3771 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 15:14:21.973533    3771 round_trippers.go:577] Response Headers:
	I1212 15:14:21.973541    3771 round_trippers.go:580]     Audit-Id: 9f55546b-38fa-4786-a4c5-3054cc50a314
	I1212 15:14:21.973549    3771 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 15:14:21.973566    3771 round_trippers.go:580]     Content-Type: application/json
	I1212 15:14:21.973574    3771 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e29f512c-9329-4cd8-9305-82ad26aea980
	I1212 15:14:21.973580    3771 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ddf0869-55a0-47cb-b7ef-d98ab66cac66
	I1212 15:14:21.973590    3771 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:22 GMT
	I1212 15:14:21.973674    3771 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-449000","uid":"ef12e061-4098-4e16-9f9d-81f611fa0b4f","resourceVersion":"434","creationTimestamp":"2023-12-12T23:13:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-449000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a4cfdd7fe6105c8f2fb237e157ac115c68ce5446","minikube.k8s.io/name":"multinode-449000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T15_13_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:13:01Z","fieldsType":"FieldsV1","fi [truncated 5130 chars]
	I1212 15:14:22.470944    3771 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-449000
	I1212 15:14:22.470958    3771 round_trippers.go:469] Request Headers:
	I1212 15:14:22.470965    3771 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 15:14:22.470970    3771 round_trippers.go:473]     Accept: application/json, */*
	I1212 15:14:22.473359    3771 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 15:14:22.473368    3771 round_trippers.go:577] Response Headers:
	I1212 15:14:22.473373    3771 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 15:14:22.473384    3771 round_trippers.go:580]     Content-Type: application/json
	I1212 15:14:22.473389    3771 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e29f512c-9329-4cd8-9305-82ad26aea980
	I1212 15:14:22.473394    3771 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ddf0869-55a0-47cb-b7ef-d98ab66cac66
	I1212 15:14:22.473399    3771 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:22 GMT
	I1212 15:14:22.473404    3771 round_trippers.go:580]     Audit-Id: dbc7940c-dd40-44f6-9dcb-1501e1f1f14c
	I1212 15:14:22.473492    3771 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-449000","uid":"ef12e061-4098-4e16-9f9d-81f611fa0b4f","resourceVersion":"434","creationTimestamp":"2023-12-12T23:13:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-449000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a4cfdd7fe6105c8f2fb237e157ac115c68ce5446","minikube.k8s.io/name":"multinode-449000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T15_13_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:13:01Z","fieldsType":"FieldsV1","fi [truncated 5130 chars]
	I1212 15:14:22.473689    3771 node_ready.go:58] node "multinode-449000" has status "Ready":"False"
	I1212 15:14:22.972090    3771 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-449000
	I1212 15:14:22.972117    3771 round_trippers.go:469] Request Headers:
	I1212 15:14:22.972130    3771 round_trippers.go:473]     Accept: application/json, */*
	I1212 15:14:22.972141    3771 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 15:14:22.974841    3771 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 15:14:22.974857    3771 round_trippers.go:577] Response Headers:
	I1212 15:14:22.974865    3771 round_trippers.go:580]     Audit-Id: 8e48a583-0c9f-4c5d-a6f3-98c670db62c0
	I1212 15:14:22.974871    3771 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 15:14:22.974901    3771 round_trippers.go:580]     Content-Type: application/json
	I1212 15:14:22.974914    3771 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e29f512c-9329-4cd8-9305-82ad26aea980
	I1212 15:14:22.974921    3771 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ddf0869-55a0-47cb-b7ef-d98ab66cac66
	I1212 15:14:22.974928    3771 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:23 GMT
	I1212 15:14:22.975069    3771 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-449000","uid":"ef12e061-4098-4e16-9f9d-81f611fa0b4f","resourceVersion":"434","creationTimestamp":"2023-12-12T23:13:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-449000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a4cfdd7fe6105c8f2fb237e157ac115c68ce5446","minikube.k8s.io/name":"multinode-449000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T15_13_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:13:01Z","fieldsType":"FieldsV1","fi [truncated 5130 chars]
	I1212 15:14:23.472109    3771 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-449000
	I1212 15:14:23.472158    3771 round_trippers.go:469] Request Headers:
	I1212 15:14:23.472172    3771 round_trippers.go:473]     Accept: application/json, */*
	I1212 15:14:23.472181    3771 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 15:14:23.474230    3771 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 15:14:23.474245    3771 round_trippers.go:577] Response Headers:
	I1212 15:14:23.474254    3771 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 15:14:23.474260    3771 round_trippers.go:580]     Content-Type: application/json
	I1212 15:14:23.474266    3771 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e29f512c-9329-4cd8-9305-82ad26aea980
	I1212 15:14:23.474273    3771 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ddf0869-55a0-47cb-b7ef-d98ab66cac66
	I1212 15:14:23.474280    3771 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:23 GMT
	I1212 15:14:23.474287    3771 round_trippers.go:580]     Audit-Id: 94db6ef7-94e3-4550-88bb-3c88eb2da56b
	I1212 15:14:23.474427    3771 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-449000","uid":"ef12e061-4098-4e16-9f9d-81f611fa0b4f","resourceVersion":"434","creationTimestamp":"2023-12-12T23:13:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-449000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a4cfdd7fe6105c8f2fb237e157ac115c68ce5446","minikube.k8s.io/name":"multinode-449000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T15_13_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:13:01Z","fieldsType":"FieldsV1","fi [truncated 5130 chars]
	I1212 15:14:23.971845    3771 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-449000
	I1212 15:14:23.971872    3771 round_trippers.go:469] Request Headers:
	I1212 15:14:23.971885    3771 round_trippers.go:473]     Accept: application/json, */*
	I1212 15:14:23.971920    3771 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 15:14:23.974647    3771 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 15:14:23.974661    3771 round_trippers.go:577] Response Headers:
	I1212 15:14:23.974676    3771 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ddf0869-55a0-47cb-b7ef-d98ab66cac66
	I1212 15:14:23.974687    3771 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:24 GMT
	I1212 15:14:23.974703    3771 round_trippers.go:580]     Audit-Id: 258392b4-af48-4732-9176-deb291ef69e2
	I1212 15:14:23.974713    3771 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 15:14:23.974721    3771 round_trippers.go:580]     Content-Type: application/json
	I1212 15:14:23.974731    3771 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e29f512c-9329-4cd8-9305-82ad26aea980
	I1212 15:14:23.975112    3771 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-449000","uid":"ef12e061-4098-4e16-9f9d-81f611fa0b4f","resourceVersion":"434","creationTimestamp":"2023-12-12T23:13:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-449000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a4cfdd7fe6105c8f2fb237e157ac115c68ce5446","minikube.k8s.io/name":"multinode-449000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T15_13_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:13:01Z","fieldsType":"FieldsV1","fi [truncated 5130 chars]
	I1212 15:14:24.472857    3771 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-449000
	I1212 15:14:24.472875    3771 round_trippers.go:469] Request Headers:
	I1212 15:14:24.472885    3771 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 15:14:24.472893    3771 round_trippers.go:473]     Accept: application/json, */*
	I1212 15:14:24.474879    3771 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 15:14:24.474890    3771 round_trippers.go:577] Response Headers:
	I1212 15:14:24.474895    3771 round_trippers.go:580]     Content-Type: application/json
	I1212 15:14:24.474900    3771 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e29f512c-9329-4cd8-9305-82ad26aea980
	I1212 15:14:24.474904    3771 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ddf0869-55a0-47cb-b7ef-d98ab66cac66
	I1212 15:14:24.474922    3771 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:24 GMT
	I1212 15:14:24.474930    3771 round_trippers.go:580]     Audit-Id: 0b1be6d0-43ac-40b9-b5e0-be23fa802bb1
	I1212 15:14:24.474947    3771 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 15:14:24.475106    3771 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-449000","uid":"ef12e061-4098-4e16-9f9d-81f611fa0b4f","resourceVersion":"513","creationTimestamp":"2023-12-12T23:13:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-449000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a4cfdd7fe6105c8f2fb237e157ac115c68ce5446","minikube.k8s.io/name":"multinode-449000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T15_13_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:13:01Z","fieldsType":"FieldsV1","fi [truncated 5003 chars]
	I1212 15:14:24.475313    3771 node_ready.go:49] node "multinode-449000" has status "Ready":"True"
	I1212 15:14:24.475326    3771 node_ready.go:38] duration metric: took 6.772129502s waiting for node "multinode-449000" to be "Ready" ...
	I1212 15:14:24.475332    3771 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 15:14:24.475372    3771 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods
	I1212 15:14:24.475381    3771 round_trippers.go:469] Request Headers:
	I1212 15:14:24.475387    3771 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 15:14:24.475393    3771 round_trippers.go:473]     Accept: application/json, */*
	I1212 15:14:24.477338    3771 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 15:14:24.477349    3771 round_trippers.go:577] Response Headers:
	I1212 15:14:24.477356    3771 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e29f512c-9329-4cd8-9305-82ad26aea980
	I1212 15:14:24.477372    3771 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ddf0869-55a0-47cb-b7ef-d98ab66cac66
	I1212 15:14:24.477377    3771 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:24 GMT
	I1212 15:14:24.477381    3771 round_trippers.go:580]     Audit-Id: 4b2cd3b3-f137-4be9-bb0d-425ef18ea93e
	I1212 15:14:24.477387    3771 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 15:14:24.477395    3771 round_trippers.go:580]     Content-Type: application/json
	I1212 15:14:24.477969    3771 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"514"},"items":[{"metadata":{"name":"coredns-5dd5756b68-gbw2q","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"09d20e99-6d1a-46d5-858f-71585ab9e532","resourceVersion":"509","creationTimestamp":"2023-12-12T23:13:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a4c42d0c-8d38-4a37-89f9-122c8abf6177","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:13:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a4c42d0c-8d38-4a37-89f9-122c8abf6177\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56446 chars]
	I1212 15:14:24.479280    3771 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-gbw2q" in "kube-system" namespace to be "Ready" ...
	I1212 15:14:24.479314    3771 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-gbw2q
	I1212 15:14:24.479322    3771 round_trippers.go:469] Request Headers:
	I1212 15:14:24.479328    3771 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 15:14:24.479333    3771 round_trippers.go:473]     Accept: application/json, */*
	I1212 15:14:24.480643    3771 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 15:14:24.480653    3771 round_trippers.go:577] Response Headers:
	I1212 15:14:24.480658    3771 round_trippers.go:580]     Audit-Id: 611c47bc-2074-48ac-92c0-21132dafab22
	I1212 15:14:24.480663    3771 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 15:14:24.480668    3771 round_trippers.go:580]     Content-Type: application/json
	I1212 15:14:24.480672    3771 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e29f512c-9329-4cd8-9305-82ad26aea980
	I1212 15:14:24.480677    3771 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ddf0869-55a0-47cb-b7ef-d98ab66cac66
	I1212 15:14:24.480682    3771 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:24 GMT
	I1212 15:14:24.480811    3771 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-gbw2q","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"09d20e99-6d1a-46d5-858f-71585ab9e532","resourceVersion":"509","creationTimestamp":"2023-12-12T23:13:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a4c42d0c-8d38-4a37-89f9-122c8abf6177","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:13:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a4c42d0c-8d38-4a37-89f9-122c8abf6177\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6489 chars]
	I1212 15:14:24.481053    3771 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-449000
	I1212 15:14:24.481060    3771 round_trippers.go:469] Request Headers:
	I1212 15:14:24.481066    3771 round_trippers.go:473]     Accept: application/json, */*
	I1212 15:14:24.481071    3771 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 15:14:24.482164    3771 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 15:14:24.482171    3771 round_trippers.go:577] Response Headers:
	I1212 15:14:24.482180    3771 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:24 GMT
	I1212 15:14:24.482185    3771 round_trippers.go:580]     Audit-Id: d77352fc-4138-4efc-9921-2ed312027bb2
	I1212 15:14:24.482191    3771 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 15:14:24.482198    3771 round_trippers.go:580]     Content-Type: application/json
	I1212 15:14:24.482205    3771 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e29f512c-9329-4cd8-9305-82ad26aea980
	I1212 15:14:24.482213    3771 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ddf0869-55a0-47cb-b7ef-d98ab66cac66
	I1212 15:14:24.482414    3771 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-449000","uid":"ef12e061-4098-4e16-9f9d-81f611fa0b4f","resourceVersion":"513","creationTimestamp":"2023-12-12T23:13:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-449000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a4cfdd7fe6105c8f2fb237e157ac115c68ce5446","minikube.k8s.io/name":"multinode-449000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T15_13_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:13:01Z","fieldsType":"FieldsV1","fi [truncated 5003 chars]
	I1212 15:14:24.482606    3771 pod_ready.go:92] pod "coredns-5dd5756b68-gbw2q" in "kube-system" namespace has status "Ready":"True"
	I1212 15:14:24.482615    3771 pod_ready.go:81] duration metric: took 3.324858ms waiting for pod "coredns-5dd5756b68-gbw2q" in "kube-system" namespace to be "Ready" ...
	I1212 15:14:24.482620    3771 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-449000" in "kube-system" namespace to be "Ready" ...
	I1212 15:14:24.482649    3771 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-449000
	I1212 15:14:24.482657    3771 round_trippers.go:469] Request Headers:
	I1212 15:14:24.482663    3771 round_trippers.go:473]     Accept: application/json, */*
	I1212 15:14:24.482669    3771 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 15:14:24.483841    3771 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 15:14:24.483851    3771 round_trippers.go:577] Response Headers:
	I1212 15:14:24.483858    3771 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e29f512c-9329-4cd8-9305-82ad26aea980
	I1212 15:14:24.483865    3771 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ddf0869-55a0-47cb-b7ef-d98ab66cac66
	I1212 15:14:24.483872    3771 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:24 GMT
	I1212 15:14:24.483878    3771 round_trippers.go:580]     Audit-Id: 7685c559-18ef-405b-a732-2e15629082d3
	I1212 15:14:24.483882    3771 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 15:14:24.483887    3771 round_trippers.go:580]     Content-Type: application/json
	I1212 15:14:24.484093    3771 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-449000","namespace":"kube-system","uid":"193c5da5-9957-4b0c-ac1f-0883f287dc0d","resourceVersion":"511","creationTimestamp":"2023-12-12T23:13:04Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.169.0.13:2379","kubernetes.io/config.hash":"1a832df13b4e9773d7a6b67fbfc8fb00","kubernetes.io/config.mirror":"1a832df13b4e9773d7a6b67fbfc8fb00","kubernetes.io/config.seen":"2023-12-12T23:13:04.726760505Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-449000","uid":"ef12e061-4098-4e16-9f9d-81f611fa0b4f","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:13:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6061 chars]
	I1212 15:14:24.484317    3771 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-449000
	I1212 15:14:24.484324    3771 round_trippers.go:469] Request Headers:
	I1212 15:14:24.484329    3771 round_trippers.go:473]     Accept: application/json, */*
	I1212 15:14:24.484335    3771 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 15:14:24.485543    3771 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 15:14:24.485551    3771 round_trippers.go:577] Response Headers:
	I1212 15:14:24.485556    3771 round_trippers.go:580]     Content-Type: application/json
	I1212 15:14:24.485564    3771 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e29f512c-9329-4cd8-9305-82ad26aea980
	I1212 15:14:24.485580    3771 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ddf0869-55a0-47cb-b7ef-d98ab66cac66
	I1212 15:14:24.485591    3771 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:24 GMT
	I1212 15:14:24.485607    3771 round_trippers.go:580]     Audit-Id: ce44f14e-cd9f-4a9e-b02c-651481321bc9
	I1212 15:14:24.485615    3771 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 15:14:24.485705    3771 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-449000","uid":"ef12e061-4098-4e16-9f9d-81f611fa0b4f","resourceVersion":"513","creationTimestamp":"2023-12-12T23:13:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-449000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a4cfdd7fe6105c8f2fb237e157ac115c68ce5446","minikube.k8s.io/name":"multinode-449000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T15_13_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:13:01Z","fieldsType":"FieldsV1","fi [truncated 5003 chars]
	I1212 15:14:24.485868    3771 pod_ready.go:92] pod "etcd-multinode-449000" in "kube-system" namespace has status "Ready":"True"
	I1212 15:14:24.485878    3771 pod_ready.go:81] duration metric: took 3.25286ms waiting for pod "etcd-multinode-449000" in "kube-system" namespace to be "Ready" ...
	I1212 15:14:24.485885    3771 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-449000" in "kube-system" namespace to be "Ready" ...
	I1212 15:14:24.485910    3771 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-449000
	I1212 15:14:24.485914    3771 round_trippers.go:469] Request Headers:
	I1212 15:14:24.485923    3771 round_trippers.go:473]     Accept: application/json, */*
	I1212 15:14:24.485930    3771 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 15:14:24.487087    3771 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 15:14:24.487105    3771 round_trippers.go:577] Response Headers:
	I1212 15:14:24.487127    3771 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e29f512c-9329-4cd8-9305-82ad26aea980
	I1212 15:14:24.487135    3771 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ddf0869-55a0-47cb-b7ef-d98ab66cac66
	I1212 15:14:24.487140    3771 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:24 GMT
	I1212 15:14:24.487145    3771 round_trippers.go:580]     Audit-Id: 32f1bf82-c944-4c4b-8caa-d43136081f1c
	I1212 15:14:24.487150    3771 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 15:14:24.487155    3771 round_trippers.go:580]     Content-Type: application/json
	I1212 15:14:24.487388    3771 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-449000","namespace":"kube-system","uid":"d0340375-33dc-42b7-9b1d-6e66ff24d07b","resourceVersion":"474","creationTimestamp":"2023-12-12T23:13:02Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.169.0.13:8443","kubernetes.io/config.hash":"713a71f0e8f1e4f4a127fa5f9adf437f","kubernetes.io/config.mirror":"713a71f0e8f1e4f4a127fa5f9adf437f","kubernetes.io/config.seen":"2023-12-12T23:12:58.089999663Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-449000","uid":"ef12e061-4098-4e16-9f9d-81f611fa0b4f","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:13:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 7841 chars]
	I1212 15:14:24.487635    3771 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-449000
	I1212 15:14:24.487644    3771 round_trippers.go:469] Request Headers:
	I1212 15:14:24.487650    3771 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 15:14:24.487656    3771 round_trippers.go:473]     Accept: application/json, */*
	I1212 15:14:24.488857    3771 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 15:14:24.488866    3771 round_trippers.go:577] Response Headers:
	I1212 15:14:24.488872    3771 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e29f512c-9329-4cd8-9305-82ad26aea980
	I1212 15:14:24.488883    3771 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ddf0869-55a0-47cb-b7ef-d98ab66cac66
	I1212 15:14:24.488891    3771 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:24 GMT
	I1212 15:14:24.488895    3771 round_trippers.go:580]     Audit-Id: e2d0c05f-3937-4a2a-9419-fe5525763c7e
	I1212 15:14:24.488900    3771 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 15:14:24.488904    3771 round_trippers.go:580]     Content-Type: application/json
	I1212 15:14:24.489008    3771 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-449000","uid":"ef12e061-4098-4e16-9f9d-81f611fa0b4f","resourceVersion":"513","creationTimestamp":"2023-12-12T23:13:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-449000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a4cfdd7fe6105c8f2fb237e157ac115c68ce5446","minikube.k8s.io/name":"multinode-449000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T15_13_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:13:01Z","fieldsType":"FieldsV1","fi [truncated 5003 chars]
	I1212 15:14:24.489197    3771 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-449000
	I1212 15:14:24.489204    3771 round_trippers.go:469] Request Headers:
	I1212 15:14:24.489209    3771 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 15:14:24.489215    3771 round_trippers.go:473]     Accept: application/json, */*
	I1212 15:14:24.490463    3771 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 15:14:24.490472    3771 round_trippers.go:577] Response Headers:
	I1212 15:14:24.490477    3771 round_trippers.go:580]     Audit-Id: aaeb4413-d58d-4ad7-b583-61c6a38974af
	I1212 15:14:24.490482    3771 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 15:14:24.490487    3771 round_trippers.go:580]     Content-Type: application/json
	I1212 15:14:24.490492    3771 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e29f512c-9329-4cd8-9305-82ad26aea980
	I1212 15:14:24.490496    3771 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ddf0869-55a0-47cb-b7ef-d98ab66cac66
	I1212 15:14:24.490501    3771 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:24 GMT
	I1212 15:14:24.490783    3771 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-449000","namespace":"kube-system","uid":"d0340375-33dc-42b7-9b1d-6e66ff24d07b","resourceVersion":"474","creationTimestamp":"2023-12-12T23:13:02Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.169.0.13:8443","kubernetes.io/config.hash":"713a71f0e8f1e4f4a127fa5f9adf437f","kubernetes.io/config.mirror":"713a71f0e8f1e4f4a127fa5f9adf437f","kubernetes.io/config.seen":"2023-12-12T23:12:58.089999663Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-449000","uid":"ef12e061-4098-4e16-9f9d-81f611fa0b4f","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:13:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 7841 chars]
	I1212 15:14:24.491023    3771 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-449000
	I1212 15:14:24.491030    3771 round_trippers.go:469] Request Headers:
	I1212 15:14:24.491036    3771 round_trippers.go:473]     Accept: application/json, */*
	I1212 15:14:24.491042    3771 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 15:14:24.492161    3771 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 15:14:24.492170    3771 round_trippers.go:577] Response Headers:
	I1212 15:14:24.492176    3771 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:24 GMT
	I1212 15:14:24.492180    3771 round_trippers.go:580]     Audit-Id: 06b0f3d0-9905-400a-81f4-ffcea79e0282
	I1212 15:14:24.492185    3771 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 15:14:24.492189    3771 round_trippers.go:580]     Content-Type: application/json
	I1212 15:14:24.492194    3771 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e29f512c-9329-4cd8-9305-82ad26aea980
	I1212 15:14:24.492199    3771 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ddf0869-55a0-47cb-b7ef-d98ab66cac66
	I1212 15:14:24.492319    3771 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-449000","uid":"ef12e061-4098-4e16-9f9d-81f611fa0b4f","resourceVersion":"513","creationTimestamp":"2023-12-12T23:13:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-449000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a4cfdd7fe6105c8f2fb237e157ac115c68ce5446","minikube.k8s.io/name":"multinode-449000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T15_13_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:13:01Z","fieldsType":"FieldsV1","fi [truncated 5003 chars]
	I1212 15:14:24.992660    3771 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-449000
	I1212 15:14:24.992683    3771 round_trippers.go:469] Request Headers:
	I1212 15:14:24.992695    3771 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 15:14:24.992705    3771 round_trippers.go:473]     Accept: application/json, */*
	I1212 15:14:24.995526    3771 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 15:14:24.995544    3771 round_trippers.go:577] Response Headers:
	I1212 15:14:24.995558    3771 round_trippers.go:580]     Content-Type: application/json
	I1212 15:14:24.995573    3771 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e29f512c-9329-4cd8-9305-82ad26aea980
	I1212 15:14:24.995585    3771 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ddf0869-55a0-47cb-b7ef-d98ab66cac66
	I1212 15:14:24.995596    3771 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:25 GMT
	I1212 15:14:24.995607    3771 round_trippers.go:580]     Audit-Id: 178c0fe8-5ad7-4f70-b6c2-de0633c8e798
	I1212 15:14:24.995618    3771 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 15:14:24.995886    3771 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-449000","namespace":"kube-system","uid":"d0340375-33dc-42b7-9b1d-6e66ff24d07b","resourceVersion":"474","creationTimestamp":"2023-12-12T23:13:02Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.169.0.13:8443","kubernetes.io/config.hash":"713a71f0e8f1e4f4a127fa5f9adf437f","kubernetes.io/config.mirror":"713a71f0e8f1e4f4a127fa5f9adf437f","kubernetes.io/config.seen":"2023-12-12T23:12:58.089999663Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-449000","uid":"ef12e061-4098-4e16-9f9d-81f611fa0b4f","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:13:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 7841 chars]
	I1212 15:14:24.996258    3771 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-449000
	I1212 15:14:24.996268    3771 round_trippers.go:469] Request Headers:
	I1212 15:14:24.996276    3771 round_trippers.go:473]     Accept: application/json, */*
	I1212 15:14:24.996283    3771 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 15:14:24.997917    3771 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 15:14:24.997937    3771 round_trippers.go:577] Response Headers:
	I1212 15:14:24.997943    3771 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 15:14:24.997948    3771 round_trippers.go:580]     Content-Type: application/json
	I1212 15:14:24.997952    3771 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e29f512c-9329-4cd8-9305-82ad26aea980
	I1212 15:14:24.997957    3771 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ddf0869-55a0-47cb-b7ef-d98ab66cac66
	I1212 15:14:24.997966    3771 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:25 GMT
	I1212 15:14:24.997972    3771 round_trippers.go:580]     Audit-Id: 20d8b6fb-d590-471d-800f-784628a0e382
	I1212 15:14:24.998051    3771 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-449000","uid":"ef12e061-4098-4e16-9f9d-81f611fa0b4f","resourceVersion":"513","creationTimestamp":"2023-12-12T23:13:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-449000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a4cfdd7fe6105c8f2fb237e157ac115c68ce5446","minikube.k8s.io/name":"multinode-449000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T15_13_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:13:01Z","fieldsType":"FieldsV1","fi [truncated 5003 chars]
	I1212 15:14:25.492843    3771 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-449000
	I1212 15:14:25.492886    3771 round_trippers.go:469] Request Headers:
	I1212 15:14:25.492896    3771 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 15:14:25.492904    3771 round_trippers.go:473]     Accept: application/json, */*
	I1212 15:14:25.495047    3771 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 15:14:25.495059    3771 round_trippers.go:577] Response Headers:
	I1212 15:14:25.495067    3771 round_trippers.go:580]     Audit-Id: 475aa92c-8a1a-4106-b9e0-502678df4b72
	I1212 15:14:25.495074    3771 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 15:14:25.495082    3771 round_trippers.go:580]     Content-Type: application/json
	I1212 15:14:25.495093    3771 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e29f512c-9329-4cd8-9305-82ad26aea980
	I1212 15:14:25.495124    3771 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ddf0869-55a0-47cb-b7ef-d98ab66cac66
	I1212 15:14:25.495132    3771 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:25 GMT
	I1212 15:14:25.495408    3771 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-449000","namespace":"kube-system","uid":"d0340375-33dc-42b7-9b1d-6e66ff24d07b","resourceVersion":"517","creationTimestamp":"2023-12-12T23:13:02Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.169.0.13:8443","kubernetes.io/config.hash":"713a71f0e8f1e4f4a127fa5f9adf437f","kubernetes.io/config.mirror":"713a71f0e8f1e4f4a127fa5f9adf437f","kubernetes.io/config.seen":"2023-12-12T23:12:58.089999663Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-449000","uid":"ef12e061-4098-4e16-9f9d-81f611fa0b4f","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:13:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 7597 chars]
	I1212 15:14:25.495676    3771 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-449000
	I1212 15:14:25.495683    3771 round_trippers.go:469] Request Headers:
	I1212 15:14:25.495689    3771 round_trippers.go:473]     Accept: application/json, */*
	I1212 15:14:25.495694    3771 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 15:14:25.497050    3771 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 15:14:25.497063    3771 round_trippers.go:577] Response Headers:
	I1212 15:14:25.497069    3771 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:25 GMT
	I1212 15:14:25.497075    3771 round_trippers.go:580]     Audit-Id: 9e1e386a-995d-4590-8d20-44ecd960ee27
	I1212 15:14:25.497086    3771 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 15:14:25.497095    3771 round_trippers.go:580]     Content-Type: application/json
	I1212 15:14:25.497103    3771 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e29f512c-9329-4cd8-9305-82ad26aea980
	I1212 15:14:25.497117    3771 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ddf0869-55a0-47cb-b7ef-d98ab66cac66
	I1212 15:14:25.497386    3771 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-449000","uid":"ef12e061-4098-4e16-9f9d-81f611fa0b4f","resourceVersion":"513","creationTimestamp":"2023-12-12T23:13:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-449000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a4cfdd7fe6105c8f2fb237e157ac115c68ce5446","minikube.k8s.io/name":"multinode-449000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T15_13_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:13:01Z","fieldsType":"FieldsV1","fi [truncated 5003 chars]
	I1212 15:14:25.497565    3771 pod_ready.go:92] pod "kube-apiserver-multinode-449000" in "kube-system" namespace has status "Ready":"True"
	I1212 15:14:25.497577    3771 pod_ready.go:81] duration metric: took 1.011690457s waiting for pod "kube-apiserver-multinode-449000" in "kube-system" namespace to be "Ready" ...
	I1212 15:14:25.497586    3771 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-449000" in "kube-system" namespace to be "Ready" ...
	I1212 15:14:25.497615    3771 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-449000
	I1212 15:14:25.497619    3771 round_trippers.go:469] Request Headers:
	I1212 15:14:25.497625    3771 round_trippers.go:473]     Accept: application/json, */*
	I1212 15:14:25.497630    3771 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 15:14:25.498870    3771 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 15:14:25.498882    3771 round_trippers.go:577] Response Headers:
	I1212 15:14:25.498890    3771 round_trippers.go:580]     Content-Type: application/json
	I1212 15:14:25.498896    3771 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e29f512c-9329-4cd8-9305-82ad26aea980
	I1212 15:14:25.498901    3771 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ddf0869-55a0-47cb-b7ef-d98ab66cac66
	I1212 15:14:25.498905    3771 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:25 GMT
	I1212 15:14:25.498910    3771 round_trippers.go:580]     Audit-Id: 0c1457a2-30c4-4cc1-b2d0-ae60d8b1a7ba
	I1212 15:14:25.498916    3771 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 15:14:25.498996    3771 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-449000","namespace":"kube-system","uid":"3cdec7d9-450b-47be-b93b-a5f3985415fa","resourceVersion":"475","creationTimestamp":"2023-12-12T23:13:04Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"ff9ba650fd2dd54b1306fbad348194b4","kubernetes.io/config.mirror":"ff9ba650fd2dd54b1306fbad348194b4","kubernetes.io/config.seen":"2023-12-12T23:12:58.090000240Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-449000","uid":"ef12e061-4098-4e16-9f9d-81f611fa0b4f","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:13:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7432 chars]
	I1212 15:14:25.673141    3771 request.go:629] Waited for 173.890159ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/nodes/multinode-449000
	I1212 15:14:25.673222    3771 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-449000
	I1212 15:14:25.673233    3771 round_trippers.go:469] Request Headers:
	I1212 15:14:25.673254    3771 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 15:14:25.673265    3771 round_trippers.go:473]     Accept: application/json, */*
	I1212 15:14:25.676202    3771 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 15:14:25.676215    3771 round_trippers.go:577] Response Headers:
	I1212 15:14:25.676222    3771 round_trippers.go:580]     Audit-Id: a6463b96-15dc-4c24-9fec-23e66c14c148
	I1212 15:14:25.676229    3771 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 15:14:25.676236    3771 round_trippers.go:580]     Content-Type: application/json
	I1212 15:14:25.676242    3771 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e29f512c-9329-4cd8-9305-82ad26aea980
	I1212 15:14:25.676249    3771 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ddf0869-55a0-47cb-b7ef-d98ab66cac66
	I1212 15:14:25.676258    3771 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:25 GMT
	I1212 15:14:25.676338    3771 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-449000","uid":"ef12e061-4098-4e16-9f9d-81f611fa0b4f","resourceVersion":"513","creationTimestamp":"2023-12-12T23:13:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-449000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a4cfdd7fe6105c8f2fb237e157ac115c68ce5446","minikube.k8s.io/name":"multinode-449000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T15_13_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:13:01Z","fieldsType":"FieldsV1","fi [truncated 5003 chars]
	I1212 15:14:25.873300    3771 request.go:629] Waited for 196.63828ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-449000
	I1212 15:14:25.873385    3771 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-449000
	I1212 15:14:25.873394    3771 round_trippers.go:469] Request Headers:
	I1212 15:14:25.873405    3771 round_trippers.go:473]     Accept: application/json, */*
	I1212 15:14:25.873415    3771 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 15:14:25.876104    3771 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 15:14:25.876121    3771 round_trippers.go:577] Response Headers:
	I1212 15:14:25.876134    3771 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 15:14:25.876141    3771 round_trippers.go:580]     Content-Type: application/json
	I1212 15:14:25.876148    3771 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e29f512c-9329-4cd8-9305-82ad26aea980
	I1212 15:14:25.876157    3771 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ddf0869-55a0-47cb-b7ef-d98ab66cac66
	I1212 15:14:25.876164    3771 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:25 GMT
	I1212 15:14:25.876182    3771 round_trippers.go:580]     Audit-Id: 481ed2a7-22f5-4119-97a8-79555f774d3c
	I1212 15:14:25.876284    3771 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-449000","namespace":"kube-system","uid":"3cdec7d9-450b-47be-b93b-a5f3985415fa","resourceVersion":"475","creationTimestamp":"2023-12-12T23:13:04Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"ff9ba650fd2dd54b1306fbad348194b4","kubernetes.io/config.mirror":"ff9ba650fd2dd54b1306fbad348194b4","kubernetes.io/config.seen":"2023-12-12T23:12:58.090000240Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-449000","uid":"ef12e061-4098-4e16-9f9d-81f611fa0b4f","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:13:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7432 chars]
	I1212 15:14:26.073068    3771 request.go:629] Waited for 196.418933ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/nodes/multinode-449000
	I1212 15:14:26.073106    3771 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-449000
	I1212 15:14:26.073111    3771 round_trippers.go:469] Request Headers:
	I1212 15:14:26.073117    3771 round_trippers.go:473]     Accept: application/json, */*
	I1212 15:14:26.073122    3771 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 15:14:26.074593    3771 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 15:14:26.074603    3771 round_trippers.go:577] Response Headers:
	I1212 15:14:26.074608    3771 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 15:14:26.074613    3771 round_trippers.go:580]     Content-Type: application/json
	I1212 15:14:26.074617    3771 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e29f512c-9329-4cd8-9305-82ad26aea980
	I1212 15:14:26.074622    3771 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ddf0869-55a0-47cb-b7ef-d98ab66cac66
	I1212 15:14:26.074629    3771 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:26 GMT
	I1212 15:14:26.074633    3771 round_trippers.go:580]     Audit-Id: 7d78e3ef-bf1a-4179-8c17-b6311fff4590
	I1212 15:14:26.074707    3771 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-449000","uid":"ef12e061-4098-4e16-9f9d-81f611fa0b4f","resourceVersion":"513","creationTimestamp":"2023-12-12T23:13:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-449000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a4cfdd7fe6105c8f2fb237e157ac115c68ce5446","minikube.k8s.io/name":"multinode-449000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T15_13_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:13:01Z","fieldsType":"FieldsV1","fi [truncated 5003 chars]
	I1212 15:14:26.575676    3771 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-449000
	I1212 15:14:26.575731    3771 round_trippers.go:469] Request Headers:
	I1212 15:14:26.575745    3771 round_trippers.go:473]     Accept: application/json, */*
	I1212 15:14:26.575757    3771 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 15:14:26.578170    3771 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 15:14:26.578183    3771 round_trippers.go:577] Response Headers:
	I1212 15:14:26.578191    3771 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:26 GMT
	I1212 15:14:26.578198    3771 round_trippers.go:580]     Audit-Id: 4146693c-e1df-4870-b57a-d627eedcf6f8
	I1212 15:14:26.578204    3771 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 15:14:26.578211    3771 round_trippers.go:580]     Content-Type: application/json
	I1212 15:14:26.578219    3771 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e29f512c-9329-4cd8-9305-82ad26aea980
	I1212 15:14:26.578230    3771 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ddf0869-55a0-47cb-b7ef-d98ab66cac66
	I1212 15:14:26.578488    3771 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-449000","namespace":"kube-system","uid":"3cdec7d9-450b-47be-b93b-a5f3985415fa","resourceVersion":"475","creationTimestamp":"2023-12-12T23:13:04Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"ff9ba650fd2dd54b1306fbad348194b4","kubernetes.io/config.mirror":"ff9ba650fd2dd54b1306fbad348194b4","kubernetes.io/config.seen":"2023-12-12T23:12:58.090000240Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-449000","uid":"ef12e061-4098-4e16-9f9d-81f611fa0b4f","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:13:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7432 chars]
	I1212 15:14:26.578847    3771 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-449000
	I1212 15:14:26.578858    3771 round_trippers.go:469] Request Headers:
	I1212 15:14:26.578871    3771 round_trippers.go:473]     Accept: application/json, */*
	I1212 15:14:26.578879    3771 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 15:14:26.580017    3771 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 15:14:26.580024    3771 round_trippers.go:577] Response Headers:
	I1212 15:14:26.580029    3771 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e29f512c-9329-4cd8-9305-82ad26aea980
	I1212 15:14:26.580043    3771 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ddf0869-55a0-47cb-b7ef-d98ab66cac66
	I1212 15:14:26.580054    3771 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:26 GMT
	I1212 15:14:26.580059    3771 round_trippers.go:580]     Audit-Id: 9a0d3200-54ea-4d4c-83b8-49672a88148b
	I1212 15:14:26.580064    3771 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 15:14:26.580069    3771 round_trippers.go:580]     Content-Type: application/json
	I1212 15:14:26.580187    3771 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-449000","uid":"ef12e061-4098-4e16-9f9d-81f611fa0b4f","resourceVersion":"513","creationTimestamp":"2023-12-12T23:13:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-449000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a4cfdd7fe6105c8f2fb237e157ac115c68ce5446","minikube.k8s.io/name":"multinode-449000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T15_13_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:13:01Z","fieldsType":"FieldsV1","fi [truncated 5003 chars]
	I1212 15:14:27.076438    3771 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-449000
	I1212 15:14:27.076478    3771 round_trippers.go:469] Request Headers:
	I1212 15:14:27.076501    3771 round_trippers.go:473]     Accept: application/json, */*
	I1212 15:14:27.076506    3771 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 15:14:27.077990    3771 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 15:14:27.078004    3771 round_trippers.go:577] Response Headers:
	I1212 15:14:27.078011    3771 round_trippers.go:580]     Audit-Id: a887b05e-3302-41f1-8e88-8151618b459b
	I1212 15:14:27.078028    3771 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 15:14:27.078037    3771 round_trippers.go:580]     Content-Type: application/json
	I1212 15:14:27.078049    3771 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e29f512c-9329-4cd8-9305-82ad26aea980
	I1212 15:14:27.078055    3771 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ddf0869-55a0-47cb-b7ef-d98ab66cac66
	I1212 15:14:27.078060    3771 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:27 GMT
	I1212 15:14:27.078171    3771 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-449000","namespace":"kube-system","uid":"3cdec7d9-450b-47be-b93b-a5f3985415fa","resourceVersion":"475","creationTimestamp":"2023-12-12T23:13:04Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"ff9ba650fd2dd54b1306fbad348194b4","kubernetes.io/config.mirror":"ff9ba650fd2dd54b1306fbad348194b4","kubernetes.io/config.seen":"2023-12-12T23:12:58.090000240Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-449000","uid":"ef12e061-4098-4e16-9f9d-81f611fa0b4f","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:13:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7432 chars]
	I1212 15:14:27.078452    3771 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-449000
	I1212 15:14:27.078459    3771 round_trippers.go:469] Request Headers:
	I1212 15:14:27.078465    3771 round_trippers.go:473]     Accept: application/json, */*
	I1212 15:14:27.078470    3771 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 15:14:27.079864    3771 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 15:14:27.079873    3771 round_trippers.go:577] Response Headers:
	I1212 15:14:27.079882    3771 round_trippers.go:580]     Content-Type: application/json
	I1212 15:14:27.079887    3771 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e29f512c-9329-4cd8-9305-82ad26aea980
	I1212 15:14:27.079891    3771 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ddf0869-55a0-47cb-b7ef-d98ab66cac66
	I1212 15:14:27.079896    3771 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:27 GMT
	I1212 15:14:27.079900    3771 round_trippers.go:580]     Audit-Id: 24adbba6-62b0-4e1f-bb4a-2471a6b871ea
	I1212 15:14:27.079905    3771 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 15:14:27.080089    3771 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-449000","uid":"ef12e061-4098-4e16-9f9d-81f611fa0b4f","resourceVersion":"513","creationTimestamp":"2023-12-12T23:13:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-449000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a4cfdd7fe6105c8f2fb237e157ac115c68ce5446","minikube.k8s.io/name":"multinode-449000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T15_13_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:13:01Z","fieldsType":"FieldsV1","fi [truncated 5003 chars]
	I1212 15:14:27.575318    3771 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-449000
	I1212 15:14:27.575340    3771 round_trippers.go:469] Request Headers:
	I1212 15:14:27.575355    3771 round_trippers.go:473]     Accept: application/json, */*
	I1212 15:14:27.575365    3771 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 15:14:27.578022    3771 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 15:14:27.578037    3771 round_trippers.go:577] Response Headers:
	I1212 15:14:27.578044    3771 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:27 GMT
	I1212 15:14:27.578051    3771 round_trippers.go:580]     Audit-Id: e8962a8f-1f86-401f-be09-0effcbc94ca2
	I1212 15:14:27.578058    3771 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 15:14:27.578065    3771 round_trippers.go:580]     Content-Type: application/json
	I1212 15:14:27.578071    3771 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e29f512c-9329-4cd8-9305-82ad26aea980
	I1212 15:14:27.578078    3771 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ddf0869-55a0-47cb-b7ef-d98ab66cac66
	I1212 15:14:27.578228    3771 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-449000","namespace":"kube-system","uid":"3cdec7d9-450b-47be-b93b-a5f3985415fa","resourceVersion":"475","creationTimestamp":"2023-12-12T23:13:04Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"ff9ba650fd2dd54b1306fbad348194b4","kubernetes.io/config.mirror":"ff9ba650fd2dd54b1306fbad348194b4","kubernetes.io/config.seen":"2023-12-12T23:12:58.090000240Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-449000","uid":"ef12e061-4098-4e16-9f9d-81f611fa0b4f","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:13:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7432 chars]
	I1212 15:14:27.578586    3771 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-449000
	I1212 15:14:27.578596    3771 round_trippers.go:469] Request Headers:
	I1212 15:14:27.578604    3771 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 15:14:27.578611    3771 round_trippers.go:473]     Accept: application/json, */*
	I1212 15:14:27.580244    3771 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 15:14:27.580253    3771 round_trippers.go:577] Response Headers:
	I1212 15:14:27.580259    3771 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e29f512c-9329-4cd8-9305-82ad26aea980
	I1212 15:14:27.580268    3771 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ddf0869-55a0-47cb-b7ef-d98ab66cac66
	I1212 15:14:27.580275    3771 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:27 GMT
	I1212 15:14:27.580280    3771 round_trippers.go:580]     Audit-Id: 84b24e27-a638-44d0-ab96-cbc355b83268
	I1212 15:14:27.580285    3771 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 15:14:27.580295    3771 round_trippers.go:580]     Content-Type: application/json
	I1212 15:14:27.580554    3771 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-449000","uid":"ef12e061-4098-4e16-9f9d-81f611fa0b4f","resourceVersion":"513","creationTimestamp":"2023-12-12T23:13:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-449000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a4cfdd7fe6105c8f2fb237e157ac115c68ce5446","minikube.k8s.io/name":"multinode-449000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T15_13_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:13:01Z","fieldsType":"FieldsV1","fi [truncated 5003 chars]
	I1212 15:14:27.580738    3771 pod_ready.go:102] pod "kube-controller-manager-multinode-449000" in "kube-system" namespace has status "Ready":"False"
	I1212 15:14:28.077102    3771 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-449000
	I1212 15:14:28.077124    3771 round_trippers.go:469] Request Headers:
	I1212 15:14:28.077136    3771 round_trippers.go:473]     Accept: application/json, */*
	I1212 15:14:28.077164    3771 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 15:14:28.080564    3771 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 15:14:28.080580    3771 round_trippers.go:577] Response Headers:
	I1212 15:14:28.080588    3771 round_trippers.go:580]     Content-Type: application/json
	I1212 15:14:28.080594    3771 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e29f512c-9329-4cd8-9305-82ad26aea980
	I1212 15:14:28.080602    3771 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ddf0869-55a0-47cb-b7ef-d98ab66cac66
	I1212 15:14:28.080627    3771 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:28 GMT
	I1212 15:14:28.080647    3771 round_trippers.go:580]     Audit-Id: a71c1596-43df-4957-ace5-4a2d2f4246cf
	I1212 15:14:28.080664    3771 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 15:14:28.080884    3771 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-449000","namespace":"kube-system","uid":"3cdec7d9-450b-47be-b93b-a5f3985415fa","resourceVersion":"475","creationTimestamp":"2023-12-12T23:13:04Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"ff9ba650fd2dd54b1306fbad348194b4","kubernetes.io/config.mirror":"ff9ba650fd2dd54b1306fbad348194b4","kubernetes.io/config.seen":"2023-12-12T23:12:58.090000240Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-449000","uid":"ef12e061-4098-4e16-9f9d-81f611fa0b4f","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:13:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7432 chars]
	I1212 15:14:28.081249    3771 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-449000
	I1212 15:14:28.081257    3771 round_trippers.go:469] Request Headers:
	I1212 15:14:28.081263    3771 round_trippers.go:473]     Accept: application/json, */*
	I1212 15:14:28.081267    3771 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 15:14:28.082885    3771 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 15:14:28.082894    3771 round_trippers.go:577] Response Headers:
	I1212 15:14:28.082903    3771 round_trippers.go:580]     Audit-Id: 93e8a3cc-404f-4eeb-9004-a2816782cc8d
	I1212 15:14:28.082908    3771 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 15:14:28.082913    3771 round_trippers.go:580]     Content-Type: application/json
	I1212 15:14:28.082918    3771 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e29f512c-9329-4cd8-9305-82ad26aea980
	I1212 15:14:28.082922    3771 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ddf0869-55a0-47cb-b7ef-d98ab66cac66
	I1212 15:14:28.082927    3771 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:28 GMT
	I1212 15:14:28.083096    3771 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-449000","uid":"ef12e061-4098-4e16-9f9d-81f611fa0b4f","resourceVersion":"513","creationTimestamp":"2023-12-12T23:13:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-449000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a4cfdd7fe6105c8f2fb237e157ac115c68ce5446","minikube.k8s.io/name":"multinode-449000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T15_13_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:13:01Z","fieldsType":"FieldsV1","fi [truncated 5003 chars]
	I1212 15:14:28.575517    3771 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-449000
	I1212 15:14:28.575537    3771 round_trippers.go:469] Request Headers:
	I1212 15:14:28.575549    3771 round_trippers.go:473]     Accept: application/json, */*
	I1212 15:14:28.575558    3771 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 15:14:28.578667    3771 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 15:14:28.578680    3771 round_trippers.go:577] Response Headers:
	I1212 15:14:28.578687    3771 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e29f512c-9329-4cd8-9305-82ad26aea980
	I1212 15:14:28.578694    3771 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ddf0869-55a0-47cb-b7ef-d98ab66cac66
	I1212 15:14:28.578699    3771 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:28 GMT
	I1212 15:14:28.578708    3771 round_trippers.go:580]     Audit-Id: ac92c9a4-e596-4ef4-bbb2-2648fc39de70
	I1212 15:14:28.578714    3771 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 15:14:28.578720    3771 round_trippers.go:580]     Content-Type: application/json
	I1212 15:14:28.578921    3771 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-449000","namespace":"kube-system","uid":"3cdec7d9-450b-47be-b93b-a5f3985415fa","resourceVersion":"524","creationTimestamp":"2023-12-12T23:13:04Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"ff9ba650fd2dd54b1306fbad348194b4","kubernetes.io/config.mirror":"ff9ba650fd2dd54b1306fbad348194b4","kubernetes.io/config.seen":"2023-12-12T23:12:58.090000240Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-449000","uid":"ef12e061-4098-4e16-9f9d-81f611fa0b4f","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:13:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7170 chars]
	I1212 15:14:28.579271    3771 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-449000
	I1212 15:14:28.579281    3771 round_trippers.go:469] Request Headers:
	I1212 15:14:28.579289    3771 round_trippers.go:473]     Accept: application/json, */*
	I1212 15:14:28.579296    3771 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 15:14:28.581130    3771 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 15:14:28.581139    3771 round_trippers.go:577] Response Headers:
	I1212 15:14:28.581146    3771 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 15:14:28.581167    3771 round_trippers.go:580]     Content-Type: application/json
	I1212 15:14:28.581179    3771 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e29f512c-9329-4cd8-9305-82ad26aea980
	I1212 15:14:28.581193    3771 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ddf0869-55a0-47cb-b7ef-d98ab66cac66
	I1212 15:14:28.581201    3771 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:28 GMT
	I1212 15:14:28.581206    3771 round_trippers.go:580]     Audit-Id: b818dedf-71ae-4c4f-9e7c-83ce8a213160
	I1212 15:14:28.581389    3771 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-449000","uid":"ef12e061-4098-4e16-9f9d-81f611fa0b4f","resourceVersion":"513","creationTimestamp":"2023-12-12T23:13:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-449000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a4cfdd7fe6105c8f2fb237e157ac115c68ce5446","minikube.k8s.io/name":"multinode-449000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T15_13_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:13:01Z","fieldsType":"FieldsV1","fi [truncated 5003 chars]
	I1212 15:14:28.581608    3771 pod_ready.go:92] pod "kube-controller-manager-multinode-449000" in "kube-system" namespace has status "Ready":"True"
	I1212 15:14:28.581620    3771 pod_ready.go:81] duration metric: took 3.084047408s waiting for pod "kube-controller-manager-multinode-449000" in "kube-system" namespace to be "Ready" ...
	I1212 15:14:28.581630    3771 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-hxq22" in "kube-system" namespace to be "Ready" ...
	I1212 15:14:28.581667    3771 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hxq22
	I1212 15:14:28.581672    3771 round_trippers.go:469] Request Headers:
	I1212 15:14:28.581677    3771 round_trippers.go:473]     Accept: application/json, */*
	I1212 15:14:28.581683    3771 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 15:14:28.583177    3771 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 15:14:28.583184    3771 round_trippers.go:577] Response Headers:
	I1212 15:14:28.583189    3771 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e29f512c-9329-4cd8-9305-82ad26aea980
	I1212 15:14:28.583194    3771 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ddf0869-55a0-47cb-b7ef-d98ab66cac66
	I1212 15:14:28.583200    3771 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:28 GMT
	I1212 15:14:28.583204    3771 round_trippers.go:580]     Audit-Id: 7cde3aa4-0468-4e63-8c28-8f013945ee1c
	I1212 15:14:28.583209    3771 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 15:14:28.583217    3771 round_trippers.go:580]     Content-Type: application/json
	I1212 15:14:28.583524    3771 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-hxq22","generateName":"kube-proxy-","namespace":"kube-system","uid":"d330b0b4-7d3f-4386-a72d-cb235945c494","resourceVersion":"501","creationTimestamp":"2023-12-12T23:13:17Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"baac289e-d94d-427e-ad81-e4b30512f118","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:13:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"baac289e-d94d-427e-ad81-e4b30512f118\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5734 chars]
	I1212 15:14:28.583759    3771 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-449000
	I1212 15:14:28.583767    3771 round_trippers.go:469] Request Headers:
	I1212 15:14:28.583775    3771 round_trippers.go:473]     Accept: application/json, */*
	I1212 15:14:28.583781    3771 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 15:14:28.585148    3771 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 15:14:28.585157    3771 round_trippers.go:577] Response Headers:
	I1212 15:14:28.585162    3771 round_trippers.go:580]     Audit-Id: f78f26e1-338e-4796-a65e-b2c2f8f60f55
	I1212 15:14:28.585168    3771 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 15:14:28.585175    3771 round_trippers.go:580]     Content-Type: application/json
	I1212 15:14:28.585182    3771 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e29f512c-9329-4cd8-9305-82ad26aea980
	I1212 15:14:28.585190    3771 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ddf0869-55a0-47cb-b7ef-d98ab66cac66
	I1212 15:14:28.585197    3771 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:28 GMT
	I1212 15:14:28.585326    3771 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-449000","uid":"ef12e061-4098-4e16-9f9d-81f611fa0b4f","resourceVersion":"513","creationTimestamp":"2023-12-12T23:13:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-449000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a4cfdd7fe6105c8f2fb237e157ac115c68ce5446","minikube.k8s.io/name":"multinode-449000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T15_13_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:13:01Z","fieldsType":"FieldsV1","fi [truncated 5003 chars]
	I1212 15:14:28.585502    3771 pod_ready.go:92] pod "kube-proxy-hxq22" in "kube-system" namespace has status "Ready":"True"
	I1212 15:14:28.585510    3771 pod_ready.go:81] duration metric: took 3.876003ms waiting for pod "kube-proxy-hxq22" in "kube-system" namespace to be "Ready" ...
	I1212 15:14:28.585516    3771 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-449000" in "kube-system" namespace to be "Ready" ...
	I1212 15:14:28.674303    3771 request.go:629] Waited for 88.744621ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-449000
	I1212 15:14:28.674371    3771 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-449000
	I1212 15:14:28.674379    3771 round_trippers.go:469] Request Headers:
	I1212 15:14:28.674390    3771 round_trippers.go:473]     Accept: application/json, */*
	I1212 15:14:28.674400    3771 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 15:14:28.676831    3771 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 15:14:28.676846    3771 round_trippers.go:577] Response Headers:
	I1212 15:14:28.676853    3771 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ddf0869-55a0-47cb-b7ef-d98ab66cac66
	I1212 15:14:28.676859    3771 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:28 GMT
	I1212 15:14:28.676866    3771 round_trippers.go:580]     Audit-Id: 2a21b4ab-f8ea-46d9-b254-2de1eaff745f
	I1212 15:14:28.676872    3771 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 15:14:28.676883    3771 round_trippers.go:580]     Content-Type: application/json
	I1212 15:14:28.676891    3771 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e29f512c-9329-4cd8-9305-82ad26aea980
	I1212 15:14:28.676960    3771 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-449000","namespace":"kube-system","uid":"6eda8382-3903-4ab4-96fb-afc4948c144b","resourceVersion":"522","creationTimestamp":"2023-12-12T23:13:04Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"d002db3a6af46c2d870b0132a00cfc72","kubernetes.io/config.mirror":"d002db3a6af46c2d870b0132a00cfc72","kubernetes.io/config.seen":"2023-12-12T23:13:04.726764045Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-449000","uid":"ef12e061-4098-4e16-9f9d-81f611fa0b4f","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:13:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4900 chars]
	I1212 15:14:28.872899    3771 request.go:629] Waited for 195.630411ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/nodes/multinode-449000
	I1212 15:14:28.872976    3771 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-449000
	I1212 15:14:28.872985    3771 round_trippers.go:469] Request Headers:
	I1212 15:14:28.872996    3771 round_trippers.go:473]     Accept: application/json, */*
	I1212 15:14:28.873006    3771 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 15:14:28.875665    3771 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 15:14:28.875680    3771 round_trippers.go:577] Response Headers:
	I1212 15:14:28.875687    3771 round_trippers.go:580]     Content-Type: application/json
	I1212 15:14:28.875694    3771 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e29f512c-9329-4cd8-9305-82ad26aea980
	I1212 15:14:28.875701    3771 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ddf0869-55a0-47cb-b7ef-d98ab66cac66
	I1212 15:14:28.875707    3771 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:28 GMT
	I1212 15:14:28.875712    3771 round_trippers.go:580]     Audit-Id: 28c73cc3-9f79-45df-9034-2b7c5949e019
	I1212 15:14:28.875719    3771 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 15:14:28.876031    3771 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-449000","uid":"ef12e061-4098-4e16-9f9d-81f611fa0b4f","resourceVersion":"513","creationTimestamp":"2023-12-12T23:13:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-449000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a4cfdd7fe6105c8f2fb237e157ac115c68ce5446","minikube.k8s.io/name":"multinode-449000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T15_13_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:13:01Z","fieldsType":"FieldsV1","fi [truncated 5003 chars]
	I1212 15:14:28.876269    3771 pod_ready.go:92] pod "kube-scheduler-multinode-449000" in "kube-system" namespace has status "Ready":"True"
	I1212 15:14:28.876281    3771 pod_ready.go:81] duration metric: took 290.759372ms waiting for pod "kube-scheduler-multinode-449000" in "kube-system" namespace to be "Ready" ...
	I1212 15:14:28.876288    3771 pod_ready.go:38] duration metric: took 4.400974315s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 15:14:28.876298    3771 api_server.go:52] waiting for apiserver process to appear ...
	I1212 15:14:28.876347    3771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 15:14:28.886435    3771 command_runner.go:130] > 1624
	I1212 15:14:28.886455    3771 api_server.go:72] duration metric: took 11.302782546s to wait for apiserver process to appear ...
	I1212 15:14:28.886460    3771 api_server.go:88] waiting for apiserver healthz status ...
	I1212 15:14:28.886468    3771 api_server.go:253] Checking apiserver healthz at https://192.169.0.13:8443/healthz ...
	I1212 15:14:28.889815    3771 api_server.go:279] https://192.169.0.13:8443/healthz returned 200:
	ok
	I1212 15:14:28.889845    3771 round_trippers.go:463] GET https://192.169.0.13:8443/version
	I1212 15:14:28.889849    3771 round_trippers.go:469] Request Headers:
	I1212 15:14:28.889855    3771 round_trippers.go:473]     Accept: application/json, */*
	I1212 15:14:28.889866    3771 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 15:14:28.890583    3771 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1212 15:14:28.890593    3771 round_trippers.go:577] Response Headers:
	I1212 15:14:28.890599    3771 round_trippers.go:580]     Audit-Id: 3c7bbab2-aceb-4a11-aee8-2cdd740917dc
	I1212 15:14:28.890605    3771 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 15:14:28.890611    3771 round_trippers.go:580]     Content-Type: application/json
	I1212 15:14:28.890627    3771 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e29f512c-9329-4cd8-9305-82ad26aea980
	I1212 15:14:28.890635    3771 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ddf0869-55a0-47cb-b7ef-d98ab66cac66
	I1212 15:14:28.890642    3771 round_trippers.go:580]     Content-Length: 264
	I1212 15:14:28.890647    3771 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:28 GMT
	I1212 15:14:28.890658    3771 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.4",
	  "gitCommit": "bae2c62678db2b5053817bc97181fcc2e8388103",
	  "gitTreeState": "clean",
	  "buildDate": "2023-11-15T16:48:54Z",
	  "goVersion": "go1.20.11",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I1212 15:14:28.890681    3771 api_server.go:141] control plane version: v1.28.4
	I1212 15:14:28.890690    3771 api_server.go:131] duration metric: took 4.226419ms to wait for apiserver health ...
	I1212 15:14:28.890695    3771 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 15:14:29.072975    3771 request.go:629] Waited for 182.240253ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods
	I1212 15:14:29.073070    3771 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods
	I1212 15:14:29.073080    3771 round_trippers.go:469] Request Headers:
	I1212 15:14:29.073092    3771 round_trippers.go:473]     Accept: application/json, */*
	I1212 15:14:29.073112    3771 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 15:14:29.076712    3771 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 15:14:29.076726    3771 round_trippers.go:577] Response Headers:
	I1212 15:14:29.076734    3771 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:29 GMT
	I1212 15:14:29.076743    3771 round_trippers.go:580]     Audit-Id: b38b3441-3773-43ed-812d-a1761961b122
	I1212 15:14:29.076753    3771 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 15:14:29.076762    3771 round_trippers.go:580]     Content-Type: application/json
	I1212 15:14:29.076772    3771 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e29f512c-9329-4cd8-9305-82ad26aea980
	I1212 15:14:29.076781    3771 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ddf0869-55a0-47cb-b7ef-d98ab66cac66
	I1212 15:14:29.077999    3771 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"524"},"items":[{"metadata":{"name":"coredns-5dd5756b68-gbw2q","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"09d20e99-6d1a-46d5-858f-71585ab9e532","resourceVersion":"509","creationTimestamp":"2023-12-12T23:13:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a4c42d0c-8d38-4a37-89f9-122c8abf6177","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:13:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a4c42d0c-8d38-4a37-89f9-122c8abf6177\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55696 chars]
	I1212 15:14:29.079313    3771 system_pods.go:59] 8 kube-system pods found
	I1212 15:14:29.079323    3771 system_pods.go:61] "coredns-5dd5756b68-gbw2q" [09d20e99-6d1a-46d5-858f-71585ab9e532] Running
	I1212 15:14:29.079327    3771 system_pods.go:61] "etcd-multinode-449000" [193c5da5-9957-4b0c-ac1f-0883f287dc0d] Running
	I1212 15:14:29.079330    3771 system_pods.go:61] "kindnet-zkv5v" [92e2a49a-0055-4ae7-a167-fb51b4275183] Running
	I1212 15:14:29.079338    3771 system_pods.go:61] "kube-apiserver-multinode-449000" [d0340375-33dc-42b7-9b1d-6e66ff24d07b] Running
	I1212 15:14:29.079342    3771 system_pods.go:61] "kube-controller-manager-multinode-449000" [3cdec7d9-450b-47be-b93b-a5f3985415fa] Running
	I1212 15:14:29.079346    3771 system_pods.go:61] "kube-proxy-hxq22" [d330b0b4-7d3f-4386-a72d-cb235945c494] Running
	I1212 15:14:29.079349    3771 system_pods.go:61] "kube-scheduler-multinode-449000" [6eda8382-3903-4ab4-96fb-afc4948c144b] Running
	I1212 15:14:29.079353    3771 system_pods.go:61] "storage-provisioner" [11d647a8-b7f7-411a-b861-f3d109085770] Running
	I1212 15:14:29.079357    3771 system_pods.go:74] duration metric: took 188.659719ms to wait for pod list to return data ...
	I1212 15:14:29.079362    3771 default_sa.go:34] waiting for default service account to be created ...
	I1212 15:14:29.273035    3771 request.go:629] Waited for 193.629476ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/namespaces/default/serviceaccounts
	I1212 15:14:29.273111    3771 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/default/serviceaccounts
	I1212 15:14:29.273121    3771 round_trippers.go:469] Request Headers:
	I1212 15:14:29.273165    3771 round_trippers.go:473]     Accept: application/json, */*
	I1212 15:14:29.273177    3771 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 15:14:29.275732    3771 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 15:14:29.275747    3771 round_trippers.go:577] Response Headers:
	I1212 15:14:29.275754    3771 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 15:14:29.275761    3771 round_trippers.go:580]     Content-Type: application/json
	I1212 15:14:29.275767    3771 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e29f512c-9329-4cd8-9305-82ad26aea980
	I1212 15:14:29.275773    3771 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ddf0869-55a0-47cb-b7ef-d98ab66cac66
	I1212 15:14:29.275780    3771 round_trippers.go:580]     Content-Length: 261
	I1212 15:14:29.275787    3771 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:29 GMT
	I1212 15:14:29.275794    3771 round_trippers.go:580]     Audit-Id: 9bc0c4c2-1a76-45b6-ad1c-d3153dfa11a7
	I1212 15:14:29.275821    3771 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"524"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"2237e2f6-7ac7-4dd4-a02d-49acbeab0757","resourceVersion":"309","creationTimestamp":"2023-12-12T23:13:16Z"}}]}
	I1212 15:14:29.275961    3771 default_sa.go:45] found service account: "default"
	I1212 15:14:29.275974    3771 default_sa.go:55] duration metric: took 196.607606ms for default service account to be created ...
	I1212 15:14:29.275982    3771 system_pods.go:116] waiting for k8s-apps to be running ...
	I1212 15:14:29.473147    3771 request.go:629] Waited for 197.067753ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods
	I1212 15:14:29.473203    3771 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods
	I1212 15:14:29.473212    3771 round_trippers.go:469] Request Headers:
	I1212 15:14:29.473224    3771 round_trippers.go:473]     Accept: application/json, */*
	I1212 15:14:29.473234    3771 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 15:14:29.476786    3771 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 15:14:29.476799    3771 round_trippers.go:577] Response Headers:
	I1212 15:14:29.476807    3771 round_trippers.go:580]     Audit-Id: 28523b24-f53a-4bdd-8336-6f2bd831c16a
	I1212 15:14:29.476813    3771 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 15:14:29.476821    3771 round_trippers.go:580]     Content-Type: application/json
	I1212 15:14:29.476827    3771 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e29f512c-9329-4cd8-9305-82ad26aea980
	I1212 15:14:29.476834    3771 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ddf0869-55a0-47cb-b7ef-d98ab66cac66
	I1212 15:14:29.476841    3771 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:29 GMT
	I1212 15:14:29.477504    3771 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"524"},"items":[{"metadata":{"name":"coredns-5dd5756b68-gbw2q","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"09d20e99-6d1a-46d5-858f-71585ab9e532","resourceVersion":"509","creationTimestamp":"2023-12-12T23:13:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a4c42d0c-8d38-4a37-89f9-122c8abf6177","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:13:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a4c42d0c-8d38-4a37-89f9-122c8abf6177\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55696 chars]
	I1212 15:14:29.478864    3771 system_pods.go:86] 8 kube-system pods found
	I1212 15:14:29.478875    3771 system_pods.go:89] "coredns-5dd5756b68-gbw2q" [09d20e99-6d1a-46d5-858f-71585ab9e532] Running
	I1212 15:14:29.478879    3771 system_pods.go:89] "etcd-multinode-449000" [193c5da5-9957-4b0c-ac1f-0883f287dc0d] Running
	I1212 15:14:29.478883    3771 system_pods.go:89] "kindnet-zkv5v" [92e2a49a-0055-4ae7-a167-fb51b4275183] Running
	I1212 15:14:29.478887    3771 system_pods.go:89] "kube-apiserver-multinode-449000" [d0340375-33dc-42b7-9b1d-6e66ff24d07b] Running
	I1212 15:14:29.478891    3771 system_pods.go:89] "kube-controller-manager-multinode-449000" [3cdec7d9-450b-47be-b93b-a5f3985415fa] Running
	I1212 15:14:29.478896    3771 system_pods.go:89] "kube-proxy-hxq22" [d330b0b4-7d3f-4386-a72d-cb235945c494] Running
	I1212 15:14:29.478899    3771 system_pods.go:89] "kube-scheduler-multinode-449000" [6eda8382-3903-4ab4-96fb-afc4948c144b] Running
	I1212 15:14:29.478903    3771 system_pods.go:89] "storage-provisioner" [11d647a8-b7f7-411a-b861-f3d109085770] Running
	I1212 15:14:29.478907    3771 system_pods.go:126] duration metric: took 202.92259ms to wait for k8s-apps to be running ...
	I1212 15:14:29.478913    3771 system_svc.go:44] waiting for kubelet service to be running ....
	I1212 15:14:29.478961    3771 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 15:14:29.488032    3771 system_svc.go:56] duration metric: took 9.109772ms WaitForService to wait for kubelet.
	I1212 15:14:29.488044    3771 kubeadm.go:581] duration metric: took 11.904376095s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1212 15:14:29.488056    3771 node_conditions.go:102] verifying NodePressure condition ...
	I1212 15:14:29.674300    3771 request.go:629] Waited for 186.100242ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/nodes
	I1212 15:14:29.674357    3771 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes
	I1212 15:14:29.674365    3771 round_trippers.go:469] Request Headers:
	I1212 15:14:29.674383    3771 round_trippers.go:473]     Accept: application/json, */*
	I1212 15:14:29.674394    3771 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 15:14:29.677497    3771 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 15:14:29.677512    3771 round_trippers.go:577] Response Headers:
	I1212 15:14:29.677519    3771 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 15:14:29.677525    3771 round_trippers.go:580]     Content-Type: application/json
	I1212 15:14:29.677531    3771 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e29f512c-9329-4cd8-9305-82ad26aea980
	I1212 15:14:29.677540    3771 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ddf0869-55a0-47cb-b7ef-d98ab66cac66
	I1212 15:14:29.677549    3771 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:29 GMT
	I1212 15:14:29.677559    3771 round_trippers.go:580]     Audit-Id: 738f2239-ec2f-46b7-8254-aeb9bf085e13
	I1212 15:14:29.677665    3771 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"524"},"items":[{"metadata":{"name":"multinode-449000","uid":"ef12e061-4098-4e16-9f9d-81f611fa0b4f","resourceVersion":"513","creationTimestamp":"2023-12-12T23:13:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-449000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a4cfdd7fe6105c8f2fb237e157ac115c68ce5446","minikube.k8s.io/name":"multinode-449000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T15_13_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 5056 chars]
	I1212 15:14:29.677925    3771 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1212 15:14:29.677940    3771 node_conditions.go:123] node cpu capacity is 2
	I1212 15:14:29.677948    3771 node_conditions.go:105] duration metric: took 189.889369ms to run NodePressure ...
	I1212 15:14:29.677957    3771 start.go:228] waiting for startup goroutines ...
	I1212 15:14:29.677963    3771 start.go:233] waiting for cluster config update ...
	I1212 15:14:29.677975    3771 start.go:242] writing updated cluster config ...
	I1212 15:14:29.678370    3771 ssh_runner.go:195] Run: rm -f paused
	I1212 15:14:29.717251    3771 start.go:600] kubectl: 1.28.2, cluster: 1.28.4 (minor skew: 0)
	I1212 15:14:29.737996    3771 out.go:177] * Done! kubectl is now configured to use "multinode-449000" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Journal begins at Tue 2023-12-12 23:13:51 UTC, ends at Tue 2023-12-12 23:14:30 UTC. --
	Dec 12 23:14:14 multinode-449000 dockerd[829]: time="2023-12-12T23:14:14.741047170Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 12 23:14:14 multinode-449000 cri-dockerd[1027]: time="2023-12-12T23:14:14Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/fd8c1a2625482be1dd7888a747109baf826ed6eb5c387c599b9d708506c7a49c/resolv.conf as [nameserver 192.169.0.1]"
	Dec 12 23:14:14 multinode-449000 dockerd[829]: time="2023-12-12T23:14:14.941443436Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 12 23:14:14 multinode-449000 dockerd[829]: time="2023-12-12T23:14:14.941565193Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 12 23:14:14 multinode-449000 dockerd[829]: time="2023-12-12T23:14:14.941593339Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 12 23:14:14 multinode-449000 dockerd[829]: time="2023-12-12T23:14:14.941612952Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 12 23:14:15 multinode-449000 cri-dockerd[1027]: time="2023-12-12T23:14:15Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/7cffdc22a3f43f092b053882267f41dc2642fc2be77bb6c91f905f6404cec1a0/resolv.conf as [nameserver 192.169.0.1]"
	Dec 12 23:14:15 multinode-449000 dockerd[829]: time="2023-12-12T23:14:15.314015791Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 12 23:14:15 multinode-449000 dockerd[829]: time="2023-12-12T23:14:15.314278842Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 12 23:14:15 multinode-449000 dockerd[829]: time="2023-12-12T23:14:15.314291171Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 12 23:14:15 multinode-449000 dockerd[829]: time="2023-12-12T23:14:15.314298450Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 12 23:14:17 multinode-449000 cri-dockerd[1027]: time="2023-12-12T23:14:17Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/66b3849798a9110a57b64253bbb603af2ba17728dc7eaf9e4f48ec5c4fa8f726/resolv.conf as [nameserver 192.169.0.1]"
	Dec 12 23:14:17 multinode-449000 dockerd[829]: time="2023-12-12T23:14:17.518746266Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 12 23:14:17 multinode-449000 dockerd[829]: time="2023-12-12T23:14:17.519117995Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 12 23:14:17 multinode-449000 dockerd[829]: time="2023-12-12T23:14:17.519194808Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 12 23:14:17 multinode-449000 dockerd[829]: time="2023-12-12T23:14:17.519251920Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 12 23:14:22 multinode-449000 dockerd[829]: time="2023-12-12T23:14:22.003640689Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 12 23:14:22 multinode-449000 dockerd[829]: time="2023-12-12T23:14:22.003685990Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 12 23:14:22 multinode-449000 dockerd[829]: time="2023-12-12T23:14:22.003706098Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 12 23:14:22 multinode-449000 dockerd[829]: time="2023-12-12T23:14:22.003715914Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 12 23:14:22 multinode-449000 cri-dockerd[1027]: time="2023-12-12T23:14:22Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/416854ec1af27a500468dfec9544e23421e8b31d5496b11afcfe0709cb95ca3a/resolv.conf as [nameserver 192.169.0.1]"
	Dec 12 23:14:22 multinode-449000 dockerd[829]: time="2023-12-12T23:14:22.354256453Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 12 23:14:22 multinode-449000 dockerd[829]: time="2023-12-12T23:14:22.354453693Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 12 23:14:22 multinode-449000 dockerd[829]: time="2023-12-12T23:14:22.354518110Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 12 23:14:22 multinode-449000 dockerd[829]: time="2023-12-12T23:14:22.354650589Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                      CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	94e368aff21e4       ead0a4a53df89                                                                              8 seconds ago        Running             coredns                   1                   416854ec1af27       coredns-5dd5756b68-gbw2q
	17be0784b8346       c7d1297425461                                                                              13 seconds ago       Running             kindnet-cni               1                   66b3849798a91       kindnet-zkv5v
	e5afc68eedda9       6e38f40d628db                                                                              15 seconds ago       Running             storage-provisioner       1                   7cffdc22a3f43       storage-provisioner
	0da1678ef4c24       83f6cc407eed8                                                                              16 seconds ago       Running             kube-proxy                1                   fd8c1a2625482       kube-proxy-hxq22
	72d03f717cc24       e3db313c6dbc0                                                                              19 seconds ago       Running             kube-scheduler            1                   a1064c36cfb9f       kube-scheduler-multinode-449000
	375931cc49b62       73deb9a3f7025                                                                              19 seconds ago       Running             etcd                      1                   efaed44d77b68       etcd-multinode-449000
	641d4dcee3a2e       d058aa5ab969c                                                                              19 seconds ago       Running             kube-controller-manager   1                   f735eb419a518       kube-controller-manager-multinode-449000
	7e9188da4ac19       7fe0e6f37db33                                                                              19 seconds ago       Running             kube-apiserver            1                   a224a0a848c57       kube-apiserver-multinode-449000
	95bc5fcd783f5       ead0a4a53df89                                                                              About a minute ago   Exited              coredns                   0                   29a2e0536a84a       coredns-5dd5756b68-gbw2q
	349aceac4c902       6e38f40d628db                                                                              About a minute ago   Exited              storage-provisioner       0                   9d7e822b848fc       storage-provisioner
	58bbe956bbc01       kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052   About a minute ago   Exited              kindnet-cni               0                   58468ea0d3365       kindnet-zkv5v
	bc270a1f54f31       83f6cc407eed8                                                                              About a minute ago   Exited              kube-proxy                0                   8189af807d9f1       kube-proxy-hxq22
	f52a90b7997c0       e3db313c6dbc0                                                                              About a minute ago   Exited              kube-scheduler            0                   4a6892d4d8341       kube-scheduler-multinode-449000
	cbf4f71244550       73deb9a3f7025                                                                              About a minute ago   Exited              etcd                      0                   de90edd09b0ec       etcd-multinode-449000
	d57c6b9df1bf2       7fe0e6f37db33                                                                              About a minute ago   Exited              kube-apiserver            0                   e22fa4a926f7b       kube-apiserver-multinode-449000
	a65940e255b01       d058aa5ab969c                                                                              About a minute ago   Exited              kube-controller-manager   0                   e84049d10a454       kube-controller-manager-multinode-449000
	
	* 
	* ==> coredns [94e368aff21e] <==
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:36626 - 20132 "HINFO IN 4050060911229301056.5380516612431628534. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.011185175s
	
	* 
	* ==> coredns [95bc5fcd783f] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:35091 - 44462 "HINFO IN 6377447879366584547.718696205685487622. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.013431538s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	* 
	* ==> describe nodes <==
	* Name:               multinode-449000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-449000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a4cfdd7fe6105c8f2fb237e157ac115c68ce5446
	                    minikube.k8s.io/name=multinode-449000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_12_12T15_13_05_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 12 Dec 2023 23:13:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-449000
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 12 Dec 2023 23:14:24 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 12 Dec 2023 23:14:24 +0000   Tue, 12 Dec 2023 23:12:59 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 12 Dec 2023 23:14:24 +0000   Tue, 12 Dec 2023 23:12:59 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 12 Dec 2023 23:14:24 +0000   Tue, 12 Dec 2023 23:12:59 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 12 Dec 2023 23:14:24 +0000   Tue, 12 Dec 2023 23:14:24 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.13
	  Hostname:    multinode-449000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2166052Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2166052Ki
	  pods:               110
	System Info:
	  Machine ID:                 4d39a0d33c3541cc99d09ae9cba43e45
	  System UUID:                9fde11ee-0000-0000-8111-f01898ef957c
	  Boot ID:                    c17ee9e4-2b44-420e-a492-b4d2402f4d1c
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://24.0.7
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-gbw2q                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     74s
	  kube-system                 etcd-multinode-449000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         87s
	  kube-system                 kindnet-zkv5v                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      74s
	  kube-system                 kube-apiserver-multinode-449000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                 kube-controller-manager-multinode-449000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         87s
	  kube-system                 kube-proxy-hxq22                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         74s
	  kube-system                 kube-scheduler-multinode-449000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         87s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         73s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 73s                kube-proxy       
	  Normal  Starting                 16s                kube-proxy       
	  Normal  NodeHasSufficientPID     87s                kubelet          Node multinode-449000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  87s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  87s                kubelet          Node multinode-449000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    87s                kubelet          Node multinode-449000 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 87s                kubelet          Starting kubelet.
	  Normal  RegisteredNode           75s                node-controller  Node multinode-449000 event: Registered Node multinode-449000 in Controller
	  Normal  NodeReady                64s                kubelet          Node multinode-449000 status is now: NodeReady
	  Normal  Starting                 21s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  21s (x8 over 21s)  kubelet          Node multinode-449000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21s (x8 over 21s)  kubelet          Node multinode-449000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21s (x7 over 21s)  kubelet          Node multinode-449000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  21s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4s                 node-controller  Node multinode-449000 event: Registered Node multinode-449000 in Controller
	
	* 
	* ==> dmesg <==
	* [  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.028530] ACPI BIOS Warning (bug): Incorrect checksum in table [DSDT] - 0xBE, should be 0x1B (20200925/tbprint-173)
	[  +5.014314] ACPI Error: Could not enable RealTimeClock event (20200925/evxfevnt-182)
	[  +0.000002] ACPI Warning: Could not enable fixed event - RealTimeClock (4) (20200925/evxface-618)
	[  +0.007042] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.347696] systemd-fstab-generator[125]: Ignoring "noauto" for root device
	[  +0.037420] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000001] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +1.885062] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +2.012553] systemd-fstab-generator[512]: Ignoring "noauto" for root device
	[  +0.083231] systemd-fstab-generator[523]: Ignoring "noauto" for root device
	[  +0.766335] systemd-fstab-generator[739]: Ignoring "noauto" for root device
	[  +0.212137] systemd-fstab-generator[779]: Ignoring "noauto" for root device
	[  +0.088669] systemd-fstab-generator[790]: Ignoring "noauto" for root device
	[  +0.100934] systemd-fstab-generator[803]: Ignoring "noauto" for root device
	[  +1.386458] systemd-fstab-generator[972]: Ignoring "noauto" for root device
	[  +0.090727] systemd-fstab-generator[983]: Ignoring "noauto" for root device
	[  +0.100127] systemd-fstab-generator[994]: Ignoring "noauto" for root device
	[  +0.094869] systemd-fstab-generator[1005]: Ignoring "noauto" for root device
	[  +0.105675] systemd-fstab-generator[1019]: Ignoring "noauto" for root device
	[Dec12 23:14] systemd-fstab-generator[1262]: Ignoring "noauto" for root device
	[  +0.228992] kauditd_printk_skb: 69 callbacks suppressed
	
	* 
	* ==> etcd [375931cc49b6] <==
	* {"level":"info","ts":"2023-12-12T23:14:11.766019Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"e0290fa3161c5471","initial-advertise-peer-urls":["https://192.169.0.13:2380"],"listen-peer-urls":["https://192.169.0.13:2380"],"advertise-client-urls":["https://192.169.0.13:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.169.0.13:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-12-12T23:14:11.766039Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-12-12T23:14:11.765892Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-12-12T23:14:11.766126Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-12-12T23:14:11.766132Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-12-12T23:14:11.766276Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.169.0.13:2380"}
	{"level":"info","ts":"2023-12-12T23:14:11.766283Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.169.0.13:2380"}
	{"level":"info","ts":"2023-12-12T23:14:11.766471Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e0290fa3161c5471 switched to configuration voters=(16152458731666035825)"}
	{"level":"info","ts":"2023-12-12T23:14:11.76651Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"87b46e718846f146","local-member-id":"e0290fa3161c5471","added-peer-id":"e0290fa3161c5471","added-peer-peer-urls":["https://192.169.0.13:2380"]}
	{"level":"info","ts":"2023-12-12T23:14:11.766567Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"87b46e718846f146","local-member-id":"e0290fa3161c5471","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-12T23:14:11.766586Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-12T23:14:13.042583Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e0290fa3161c5471 is starting a new election at term 2"}
	{"level":"info","ts":"2023-12-12T23:14:13.042696Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e0290fa3161c5471 became pre-candidate at term 2"}
	{"level":"info","ts":"2023-12-12T23:14:13.042817Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e0290fa3161c5471 received MsgPreVoteResp from e0290fa3161c5471 at term 2"}
	{"level":"info","ts":"2023-12-12T23:14:13.042901Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e0290fa3161c5471 became candidate at term 3"}
	{"level":"info","ts":"2023-12-12T23:14:13.042952Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e0290fa3161c5471 received MsgVoteResp from e0290fa3161c5471 at term 3"}
	{"level":"info","ts":"2023-12-12T23:14:13.043079Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e0290fa3161c5471 became leader at term 3"}
	{"level":"info","ts":"2023-12-12T23:14:13.043129Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: e0290fa3161c5471 elected leader e0290fa3161c5471 at term 3"}
	{"level":"info","ts":"2023-12-12T23:14:13.044064Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"e0290fa3161c5471","local-member-attributes":"{Name:multinode-449000 ClientURLs:[https://192.169.0.13:2379]}","request-path":"/0/members/e0290fa3161c5471/attributes","cluster-id":"87b46e718846f146","publish-timeout":"7s"}
	{"level":"info","ts":"2023-12-12T23:14:13.044317Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-12T23:14:13.045133Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.169.0.13:2379"}
	{"level":"info","ts":"2023-12-12T23:14:13.045337Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-12T23:14:13.0461Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-12-12T23:14:13.046181Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-12-12T23:14:13.046691Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	* 
	* ==> etcd [cbf4f7124455] <==
	* {"level":"info","ts":"2023-12-12T23:13:00.594053Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e0290fa3161c5471 became leader at term 2"}
	{"level":"info","ts":"2023-12-12T23:13:00.594061Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: e0290fa3161c5471 elected leader e0290fa3161c5471 at term 2"}
	{"level":"info","ts":"2023-12-12T23:13:00.594813Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"e0290fa3161c5471","local-member-attributes":"{Name:multinode-449000 ClientURLs:[https://192.169.0.13:2379]}","request-path":"/0/members/e0290fa3161c5471/attributes","cluster-id":"87b46e718846f146","publish-timeout":"7s"}
	{"level":"info","ts":"2023-12-12T23:13:00.596408Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-12T23:13:00.597065Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-12-12T23:13:00.597176Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-12T23:13:00.597273Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-12T23:13:00.601498Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.169.0.13:2379"}
	{"level":"info","ts":"2023-12-12T23:13:00.601865Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-12-12T23:13:00.601875Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-12-12T23:13:00.623742Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"87b46e718846f146","local-member-id":"e0290fa3161c5471","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-12T23:13:00.623931Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-12T23:13:00.624046Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-12T23:13:17.553066Z","caller":"traceutil/trace.go:171","msg":"trace[473893054] transaction","detail":"{read_only:false; response_revision:344; number_of_response:1; }","duration":"111.978423ms","start":"2023-12-12T23:13:17.440922Z","end":"2023-12-12T23:13:17.5529Z","steps":["trace[473893054] 'process raft request'  (duration: 37.599328ms)","trace[473893054] 'compare'  (duration: 74.277806ms)"],"step_count":2}
	{"level":"info","ts":"2023-12-12T23:13:17.553742Z","caller":"traceutil/trace.go:171","msg":"trace[91434881] transaction","detail":"{read_only:false; response_revision:345; number_of_response:1; }","duration":"111.636257ms","start":"2023-12-12T23:13:17.442093Z","end":"2023-12-12T23:13:17.553729Z","steps":["trace[91434881] 'process raft request'  (duration: 111.32696ms)"],"step_count":1}
	{"level":"info","ts":"2023-12-12T23:13:35.183429Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2023-12-12T23:13:35.183471Z","caller":"embed/etcd.go:376","msg":"closing etcd server","name":"multinode-449000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.169.0.13:2380"],"advertise-client-urls":["https://192.169.0.13:2379"]}
	{"level":"warn","ts":"2023-12-12T23:13:35.183566Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-12-12T23:13:35.183635Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-12-12T23:13:35.197401Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.169.0.13:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-12-12T23:13:35.197445Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.169.0.13:2379: use of closed network connection"}
	{"level":"info","ts":"2023-12-12T23:13:35.197493Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"e0290fa3161c5471","current-leader-member-id":"e0290fa3161c5471"}
	{"level":"info","ts":"2023-12-12T23:13:35.198654Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.169.0.13:2380"}
	{"level":"info","ts":"2023-12-12T23:13:35.198691Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.169.0.13:2380"}
	{"level":"info","ts":"2023-12-12T23:13:35.198697Z","caller":"embed/etcd.go:378","msg":"closed etcd server","name":"multinode-449000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.169.0.13:2380"],"advertise-client-urls":["https://192.169.0.13:2379"]}
	
	* 
	* ==> kernel <==
	*  23:14:31 up 0 min,  0 users,  load average: 0.41, 0.11, 0.03
	Linux multinode-449000 5.10.57 #1 SMP Fri Dec 8 05:36:01 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kindnet [17be0784b834] <==
	* I1212 23:14:17.746299       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I1212 23:14:17.746371       1 main.go:107] hostIP = 192.169.0.13
	podIP = 192.169.0.13
	I1212 23:14:17.746528       1 main.go:116] setting mtu 1500 for CNI 
	I1212 23:14:17.746558       1 main.go:146] kindnetd IP family: "ipv4"
	I1212 23:14:17.746578       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I1212 23:14:18.045367       1 main.go:223] Handling node with IPs: map[192.169.0.13:{}]
	I1212 23:14:18.045594       1 main.go:227] handling current node
	I1212 23:14:28.058397       1 main.go:223] Handling node with IPs: map[192.169.0.13:{}]
	I1212 23:14:28.058431       1 main.go:227] handling current node
	
	* 
	* ==> kindnet [58bbe956bbc0] <==
	* I1212 23:13:23.520861       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I1212 23:13:23.520913       1 main.go:107] hostIP = 192.169.0.13
	podIP = 192.169.0.13
	I1212 23:13:23.521005       1 main.go:116] setting mtu 1500 for CNI 
	I1212 23:13:23.521018       1 main.go:146] kindnetd IP family: "ipv4"
	I1212 23:13:23.521036       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I1212 23:13:23.724964       1 main.go:223] Handling node with IPs: map[192.169.0.13:{}]
	I1212 23:13:23.725050       1 main.go:227] handling current node
	I1212 23:13:33.727761       1 main.go:223] Handling node with IPs: map[192.169.0.13:{}]
	I1212 23:13:33.727777       1 main.go:227] handling current node
	
	* 
	* ==> kube-apiserver [7e9188da4ac1] <==
	* I1212 23:14:14.064619       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I1212 23:14:14.064715       1 crd_finalizer.go:266] Starting CRDFinalizer
	I1212 23:14:14.031369       1 gc_controller.go:78] Starting apiserver lease garbage collector
	I1212 23:14:14.031459       1 aggregator.go:164] waiting for initial CRD sync...
	I1212 23:14:14.031483       1 dynamic_serving_content.go:132] "Starting controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I1212 23:14:14.088133       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1212 23:14:14.125243       1 shared_informer.go:318] Caches are synced for node_authorizer
	E1212 23:14:14.130678       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1212 23:14:14.130964       1 shared_informer.go:318] Caches are synced for configmaps
	I1212 23:14:14.131415       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1212 23:14:14.131454       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1212 23:14:14.132094       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1212 23:14:14.132136       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1212 23:14:14.132384       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1212 23:14:14.132903       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1212 23:14:14.132934       1 aggregator.go:166] initial CRD sync complete...
	I1212 23:14:14.132939       1 autoregister_controller.go:141] Starting autoregister controller
	I1212 23:14:14.132942       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1212 23:14:14.132946       1 cache.go:39] Caches are synced for autoregister controller
	I1212 23:14:15.036762       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1212 23:14:16.454625       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1212 23:14:16.534377       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1212 23:14:16.550673       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1212 23:14:16.593258       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1212 23:14:16.598012       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	* 
	* ==> kube-apiserver [d57c6b9df1bf] <==
	* W1212 23:13:35.192081       1 logging.go:59] [core] [Channel #55 SubChannel #56] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 23:13:35.192095       1 logging.go:59] [core] [Channel #43 SubChannel #44] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 23:13:35.192105       1 logging.go:59] [core] [Channel #79 SubChannel #80] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 23:13:35.192123       1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 23:13:35.192138       1 logging.go:59] [core] [Channel #85 SubChannel #86] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 23:13:35.192146       1 logging.go:59] [core] [Channel #31 SubChannel #32] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 23:13:35.192161       1 logging.go:59] [core] [Channel #124 SubChannel #125] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 23:13:35.192178       1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 23:13:35.192183       1 logging.go:59] [core] [Channel #22 SubChannel #23] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 23:13:35.192200       1 logging.go:59] [core] [Channel #109 SubChannel #110] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 23:13:35.192205       1 logging.go:59] [core] [Channel #49 SubChannel #50] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 23:13:35.192228       1 logging.go:59] [core] [Channel #160 SubChannel #161] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 23:13:35.192232       1 logging.go:59] [core] [Channel #163 SubChannel #164] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 23:13:35.192250       1 logging.go:59] [core] [Channel #37 SubChannel #38] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 23:13:35.192262       1 logging.go:59] [core] [Channel #133 SubChannel #134] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 23:13:35.192271       1 logging.go:59] [core] [Channel #52 SubChannel #53] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 23:13:35.192285       1 logging.go:59] [core] [Channel #25 SubChannel #26] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 23:13:35.192302       1 logging.go:59] [core] [Channel #130 SubChannel #131] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 23:13:35.192317       1 logging.go:59] [core] [Channel #154 SubChannel #155] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 23:13:35.192323       1 logging.go:59] [core] [Channel #91 SubChannel #92] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 23:13:35.191582       1 logging.go:59] [core] [Channel #70 SubChannel #71] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 23:13:35.192343       1 logging.go:59] [core] [Channel #142 SubChannel #143] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 23:13:35.192363       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 23:13:35.192396       1 logging.go:59] [core] [Channel #112 SubChannel #113] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I1212 23:13:35.207051       1 controller.go:128] Shutting down kubernetes service endpoint reconciler
	
	* 
	* ==> kube-controller-manager [641d4dcee3a2] <==
	* I1212 23:14:27.097050       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-449000\" does not exist"
	I1212 23:14:27.101913       1 shared_informer.go:318] Caches are synced for node
	I1212 23:14:27.101984       1 range_allocator.go:174] "Sending events to api server"
	I1212 23:14:27.102118       1 range_allocator.go:178] "Starting range CIDR allocator"
	I1212 23:14:27.102225       1 shared_informer.go:311] Waiting for caches to sync for cidrallocator
	I1212 23:14:27.102281       1 shared_informer.go:318] Caches are synced for cidrallocator
	I1212 23:14:27.109462       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I1212 23:14:27.129639       1 shared_informer.go:318] Caches are synced for TTL
	I1212 23:14:27.143921       1 shared_informer.go:318] Caches are synced for taint
	I1212 23:14:27.144393       1 taint_manager.go:205] "Starting NoExecuteTaintManager"
	I1212 23:14:27.144793       1 node_lifecycle_controller.go:1225] "Initializing eviction metric for zone" zone=""
	I1212 23:14:27.144962       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-449000"
	I1212 23:14:27.145248       1 taint_manager.go:210] "Sending events to api server"
	I1212 23:14:27.144480       1 shared_informer.go:318] Caches are synced for persistent volume
	I1212 23:14:27.146443       1 event.go:307] "Event occurred" object="multinode-449000" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-449000 event: Registered Node multinode-449000 in Controller"
	I1212 23:14:27.146609       1 node_lifecycle_controller.go:1071] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I1212 23:14:27.183069       1 shared_informer.go:318] Caches are synced for GC
	I1212 23:14:27.188968       1 shared_informer.go:318] Caches are synced for stateful set
	I1212 23:14:27.195909       1 shared_informer.go:318] Caches are synced for attach detach
	I1212 23:14:27.222794       1 shared_informer.go:318] Caches are synced for resource quota
	I1212 23:14:27.244316       1 shared_informer.go:318] Caches are synced for daemon sets
	I1212 23:14:27.246695       1 shared_informer.go:318] Caches are synced for resource quota
	I1212 23:14:27.552889       1 shared_informer.go:318] Caches are synced for garbage collector
	I1212 23:14:27.553092       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1212 23:14:27.578946       1 shared_informer.go:318] Caches are synced for garbage collector
	
	* 
	* ==> kube-controller-manager [a65940e255b0] <==
	* I1212 23:13:16.536339       1 shared_informer.go:318] Caches are synced for resource quota
	I1212 23:13:16.580270       1 shared_informer.go:318] Caches are synced for deployment
	I1212 23:13:16.583892       1 shared_informer.go:318] Caches are synced for disruption
	I1212 23:13:16.584993       1 shared_informer.go:318] Caches are synced for resource quota
	I1212 23:13:16.625708       1 shared_informer.go:318] Caches are synced for ReplicaSet
	I1212 23:13:16.965091       1 shared_informer.go:318] Caches are synced for garbage collector
	I1212 23:13:16.991675       1 shared_informer.go:318] Caches are synced for garbage collector
	I1212 23:13:16.991709       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1212 23:13:17.139253       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-zkv5v"
	I1212 23:13:17.141698       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-hxq22"
	I1212 23:13:17.333986       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5dd5756b68 to 2"
	I1212 23:13:17.557309       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-pk47r"
	I1212 23:13:17.557360       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I1212 23:13:17.569686       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-gbw2q"
	I1212 23:13:17.589493       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="255.869106ms"
	I1212 23:13:17.604752       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-pk47r"
	I1212 23:13:17.611415       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="21.817254ms"
	I1212 23:13:17.624419       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="12.85131ms"
	I1212 23:13:17.624716       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="82.156µs"
	I1212 23:13:27.969254       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="53.807µs"
	I1212 23:13:27.989675       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="72.171µs"
	I1212 23:13:29.737447       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="61.07µs"
	I1212 23:13:29.778766       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="15.904788ms"
	I1212 23:13:29.778912       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="32.956µs"
	I1212 23:13:31.438926       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	
	* 
	* ==> kube-proxy [0da1678ef4c2] <==
	* I1212 23:14:15.224642       1 server_others.go:69] "Using iptables proxy"
	I1212 23:14:15.250649       1 node.go:141] Successfully retrieved node IP: 192.169.0.13
	I1212 23:14:15.308832       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1212 23:14:15.308873       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1212 23:14:15.310476       1 server_others.go:152] "Using iptables Proxier"
	I1212 23:14:15.310850       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1212 23:14:15.311287       1 server.go:846] "Version info" version="v1.28.4"
	I1212 23:14:15.311316       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1212 23:14:15.314039       1 config.go:188] "Starting service config controller"
	I1212 23:14:15.314498       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1212 23:14:15.314569       1 config.go:315] "Starting node config controller"
	I1212 23:14:15.314593       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1212 23:14:15.316013       1 config.go:97] "Starting endpoint slice config controller"
	I1212 23:14:15.316057       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1212 23:14:15.415374       1 shared_informer.go:318] Caches are synced for node config
	I1212 23:14:15.415435       1 shared_informer.go:318] Caches are synced for service config
	I1212 23:14:15.416613       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-proxy [bc270a1f54f3] <==
	* I1212 23:13:18.012684       1 server_others.go:69] "Using iptables proxy"
	I1212 23:13:18.037892       1 node.go:141] Successfully retrieved node IP: 192.169.0.13
	I1212 23:13:18.072494       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1212 23:13:18.072509       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1212 23:13:18.074981       1 server_others.go:152] "Using iptables Proxier"
	I1212 23:13:18.075043       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1212 23:13:18.075202       1 server.go:846] "Version info" version="v1.28.4"
	I1212 23:13:18.075209       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1212 23:13:18.076295       1 config.go:188] "Starting service config controller"
	I1212 23:13:18.076303       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1212 23:13:18.076315       1 config.go:97] "Starting endpoint slice config controller"
	I1212 23:13:18.076318       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1212 23:13:18.076333       1 config.go:315] "Starting node config controller"
	I1212 23:13:18.076335       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1212 23:13:18.177081       1 shared_informer.go:318] Caches are synced for node config
	I1212 23:13:18.177098       1 shared_informer.go:318] Caches are synced for service config
	I1212 23:13:18.177117       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-scheduler [72d03f717cc2] <==
	* I1212 23:14:12.618876       1 serving.go:348] Generated self-signed cert in-memory
	W1212 23:14:14.077358       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1212 23:14:14.077437       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1212 23:14:14.077459       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1212 23:14:14.077471       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1212 23:14:14.090493       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I1212 23:14:14.090793       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1212 23:14:14.092618       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1212 23:14:14.092701       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1212 23:14:14.093030       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1212 23:14:14.092764       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1212 23:14:14.194216       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kube-scheduler [f52a90b7997c] <==
	* W1212 23:13:01.627882       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1212 23:13:01.627969       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1212 23:13:01.628097       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1212 23:13:01.628146       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1212 23:13:01.628239       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1212 23:13:01.628286       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1212 23:13:01.628384       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1212 23:13:01.628478       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1212 23:13:02.458336       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1212 23:13:02.458362       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1212 23:13:02.467319       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1212 23:13:02.467352       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1212 23:13:02.496299       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1212 23:13:02.496382       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1212 23:13:02.572595       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1212 23:13:02.572751       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1212 23:13:02.707713       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1212 23:13:02.707895       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1212 23:13:02.722617       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1212 23:13:02.722657       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I1212 23:13:04.511351       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1212 23:13:35.140016       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I1212 23:13:35.140121       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I1212 23:13:35.140233       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E1212 23:13:35.140522       1 run.go:74] "command failed" err="finished without leader elect"
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Tue 2023-12-12 23:13:51 UTC, ends at Tue 2023-12-12 23:14:32 UTC. --
	Dec 12 23:14:14 multinode-449000 kubelet[1268]: I1212 23:14:14.315028    1268 topology_manager.go:215] "Topology Admit Handler" podUID="92e2a49a-0055-4ae7-a167-fb51b4275183" podNamespace="kube-system" podName="kindnet-zkv5v"
	Dec 12 23:14:14 multinode-449000 kubelet[1268]: I1212 23:14:14.315192    1268 topology_manager.go:215] "Topology Admit Handler" podUID="d330b0b4-7d3f-4386-a72d-cb235945c494" podNamespace="kube-system" podName="kube-proxy-hxq22"
	Dec 12 23:14:14 multinode-449000 kubelet[1268]: I1212 23:14:14.315283    1268 topology_manager.go:215] "Topology Admit Handler" podUID="11d647a8-b7f7-411a-b861-f3d109085770" podNamespace="kube-system" podName="storage-provisioner"
	Dec 12 23:14:14 multinode-449000 kubelet[1268]: E1212 23:14:14.317796    1268 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-gbw2q" podUID="09d20e99-6d1a-46d5-858f-71585ab9e532"
	Dec 12 23:14:14 multinode-449000 kubelet[1268]: I1212 23:14:14.319246    1268 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
	Dec 12 23:14:14 multinode-449000 kubelet[1268]: I1212 23:14:14.373810    1268 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/11d647a8-b7f7-411a-b861-f3d109085770-tmp\") pod \"storage-provisioner\" (UID: \"11d647a8-b7f7-411a-b861-f3d109085770\") " pod="kube-system/storage-provisioner"
	Dec 12 23:14:14 multinode-449000 kubelet[1268]: I1212 23:14:14.373950    1268 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d330b0b4-7d3f-4386-a72d-cb235945c494-xtables-lock\") pod \"kube-proxy-hxq22\" (UID: \"d330b0b4-7d3f-4386-a72d-cb235945c494\") " pod="kube-system/kube-proxy-hxq22"
	Dec 12 23:14:14 multinode-449000 kubelet[1268]: I1212 23:14:14.374024    1268 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/92e2a49a-0055-4ae7-a167-fb51b4275183-xtables-lock\") pod \"kindnet-zkv5v\" (UID: \"92e2a49a-0055-4ae7-a167-fb51b4275183\") " pod="kube-system/kindnet-zkv5v"
	Dec 12 23:14:14 multinode-449000 kubelet[1268]: I1212 23:14:14.374071    1268 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/92e2a49a-0055-4ae7-a167-fb51b4275183-lib-modules\") pod \"kindnet-zkv5v\" (UID: \"92e2a49a-0055-4ae7-a167-fb51b4275183\") " pod="kube-system/kindnet-zkv5v"
	Dec 12 23:14:14 multinode-449000 kubelet[1268]: I1212 23:14:14.374160    1268 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d330b0b4-7d3f-4386-a72d-cb235945c494-lib-modules\") pod \"kube-proxy-hxq22\" (UID: \"d330b0b4-7d3f-4386-a72d-cb235945c494\") " pod="kube-system/kube-proxy-hxq22"
	Dec 12 23:14:14 multinode-449000 kubelet[1268]: I1212 23:14:14.374215    1268 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/92e2a49a-0055-4ae7-a167-fb51b4275183-cni-cfg\") pod \"kindnet-zkv5v\" (UID: \"92e2a49a-0055-4ae7-a167-fb51b4275183\") " pod="kube-system/kindnet-zkv5v"
	Dec 12 23:14:14 multinode-449000 kubelet[1268]: E1212 23:14:14.374634    1268 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Dec 12 23:14:14 multinode-449000 kubelet[1268]: E1212 23:14:14.374789    1268 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/09d20e99-6d1a-46d5-858f-71585ab9e532-config-volume podName:09d20e99-6d1a-46d5-858f-71585ab9e532 nodeName:}" failed. No retries permitted until 2023-12-12 23:14:14.874754555 +0000 UTC m=+4.704404311 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/09d20e99-6d1a-46d5-858f-71585ab9e532-config-volume") pod "coredns-5dd5756b68-gbw2q" (UID: "09d20e99-6d1a-46d5-858f-71585ab9e532") : object "kube-system"/"coredns" not registered
	Dec 12 23:14:14 multinode-449000 kubelet[1268]: E1212 23:14:14.877665    1268 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Dec 12 23:14:14 multinode-449000 kubelet[1268]: E1212 23:14:14.877772    1268 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/09d20e99-6d1a-46d5-858f-71585ab9e532-config-volume podName:09d20e99-6d1a-46d5-858f-71585ab9e532 nodeName:}" failed. No retries permitted until 2023-12-12 23:14:15.877760314 +0000 UTC m=+5.707410069 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/09d20e99-6d1a-46d5-858f-71585ab9e532-config-volume") pod "coredns-5dd5756b68-gbw2q" (UID: "09d20e99-6d1a-46d5-858f-71585ab9e532") : object "kube-system"/"coredns" not registered
	Dec 12 23:14:14 multinode-449000 kubelet[1268]: I1212 23:14:14.890079    1268 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fd8c1a2625482be1dd7888a747109baf826ed6eb5c387c599b9d708506c7a49c"
	Dec 12 23:14:15 multinode-449000 kubelet[1268]: E1212 23:14:15.401265    1268 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	Dec 12 23:14:15 multinode-449000 kubelet[1268]: E1212 23:14:15.885579    1268 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Dec 12 23:14:15 multinode-449000 kubelet[1268]: E1212 23:14:15.885630    1268 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/09d20e99-6d1a-46d5-858f-71585ab9e532-config-volume podName:09d20e99-6d1a-46d5-858f-71585ab9e532 nodeName:}" failed. No retries permitted until 2023-12-12 23:14:17.885619912 +0000 UTC m=+7.715269668 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/09d20e99-6d1a-46d5-858f-71585ab9e532-config-volume") pod "coredns-5dd5756b68-gbw2q" (UID: "09d20e99-6d1a-46d5-858f-71585ab9e532") : object "kube-system"/"coredns" not registered
	Dec 12 23:14:17 multinode-449000 kubelet[1268]: I1212 23:14:17.468943    1268 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="66b3849798a9110a57b64253bbb603af2ba17728dc7eaf9e4f48ec5c4fa8f726"
	Dec 12 23:14:17 multinode-449000 kubelet[1268]: I1212 23:14:17.477215    1268 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7cffdc22a3f43f092b053882267f41dc2642fc2be77bb6c91f905f6404cec1a0"
	Dec 12 23:14:17 multinode-449000 kubelet[1268]: E1212 23:14:17.477562    1268 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-gbw2q" podUID="09d20e99-6d1a-46d5-858f-71585ab9e532"
	Dec 12 23:14:17 multinode-449000 kubelet[1268]: E1212 23:14:17.900843    1268 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Dec 12 23:14:17 multinode-449000 kubelet[1268]: E1212 23:14:17.900894    1268 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/09d20e99-6d1a-46d5-858f-71585ab9e532-config-volume podName:09d20e99-6d1a-46d5-858f-71585ab9e532 nodeName:}" failed. No retries permitted until 2023-12-12 23:14:21.90088354 +0000 UTC m=+11.730533296 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/09d20e99-6d1a-46d5-858f-71585ab9e532-config-volume") pod "coredns-5dd5756b68-gbw2q" (UID: "09d20e99-6d1a-46d5-858f-71585ab9e532") : object "kube-system"/"coredns" not registered
	Dec 12 23:14:19 multinode-449000 kubelet[1268]: E1212 23:14:19.349665    1268 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-gbw2q" podUID="09d20e99-6d1a-46d5-858f-71585ab9e532"
	
	* 
	* ==> storage-provisioner [349aceac4c90] <==
	* I1212 23:13:28.776618       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1212 23:13:28.782292       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1212 23:13:28.782347       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1212 23:13:28.787077       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1212 23:13:28.787616       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_multinode-449000_bf43f63a-cdfb-4d50-832d-d0ae8d0a0d1a!
	I1212 23:13:28.789693       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"3abdb08b-1824-4529-8878-e42e5ba065dd", APIVersion:"v1", ResourceVersion:"406", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' multinode-449000_bf43f63a-cdfb-4d50-832d-d0ae8d0a0d1a became leader
	I1212 23:13:28.888957       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_multinode-449000_bf43f63a-cdfb-4d50-832d-d0ae8d0a0d1a!
	
	* 
	* ==> storage-provisioner [e5afc68eedda] <==
	* I1212 23:14:15.419211       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p multinode-449000 -n multinode-449000
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-449000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartMultiNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartMultiNode (50.18s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (86.47s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:471: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-449000
multinode_test.go:480: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-449000-m01 --driver=hyperkit 
multinode_test.go:480: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-449000-m01 --driver=hyperkit : (38.084288173s)
multinode_test.go:482: expected start profile command to fail. args "out/minikube-darwin-amd64 start -p multinode-449000-m01 --driver=hyperkit "
multinode_test.go:488: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-449000-m02 --driver=hyperkit 
multinode_test.go:488: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-449000-m02 --driver=hyperkit : (39.260752165s)
multinode_test.go:495: (dbg) Run:  out/minikube-darwin-amd64 node add -p multinode-449000
multinode_test.go:495: (dbg) Non-zero exit: out/minikube-darwin-amd64 node add -p multinode-449000: exit status 80 (277.909849ms)

                                                
                                                
-- stdout --
	* Adding node m02 to cluster multinode-449000
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-449000-m02 already exists in multinode-449000-m02 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:500: (dbg) Run:  out/minikube-darwin-amd64 delete -p multinode-449000-m02
multinode_test.go:500: (dbg) Done: out/minikube-darwin-amd64 delete -p multinode-449000-m02: (5.355385926s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-449000 -n multinode-449000
helpers_test.go:244: <<< TestMultiNode/serial/ValidateNameConflict FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/ValidateNameConflict]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-449000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p multinode-449000 logs -n 25: (2.991330122s)
helpers_test.go:252: TestMultiNode/serial/ValidateNameConflict logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| kubectl | -p multinode-449000 -- get pods -o   | multinode-449000     | jenkins | v1.32.0 | 12 Dec 23 15:10 PST |                     |
	|         | jsonpath='{.items[*].status.podIP}'  |                      |         |         |                     |                     |
	| kubectl | -p multinode-449000 -- get pods -o   | multinode-449000     | jenkins | v1.32.0 | 12 Dec 23 15:11 PST |                     |
	|         | jsonpath='{.items[*].status.podIP}'  |                      |         |         |                     |                     |
	| kubectl | -p multinode-449000 -- get pods -o   | multinode-449000     | jenkins | v1.32.0 | 12 Dec 23 15:11 PST |                     |
	|         | jsonpath='{.items[*].status.podIP}'  |                      |         |         |                     |                     |
	| kubectl | -p multinode-449000 -- get pods -o   | multinode-449000     | jenkins | v1.32.0 | 12 Dec 23 15:11 PST |                     |
	|         | jsonpath='{.items[*].status.podIP}'  |                      |         |         |                     |                     |
	| kubectl | -p multinode-449000 -- get pods -o   | multinode-449000     | jenkins | v1.32.0 | 12 Dec 23 15:12 PST |                     |
	|         | jsonpath='{.items[*].status.podIP}'  |                      |         |         |                     |                     |
	| kubectl | -p multinode-449000 -- get pods -o   | multinode-449000     | jenkins | v1.32.0 | 12 Dec 23 15:12 PST |                     |
	|         | jsonpath='{.items[*].metadata.name}' |                      |         |         |                     |                     |
	| kubectl | -p multinode-449000 -- exec          | multinode-449000     | jenkins | v1.32.0 | 12 Dec 23 15:12 PST |                     |
	|         | -- nslookup kubernetes.io            |                      |         |         |                     |                     |
	| kubectl | -p multinode-449000 -- exec          | multinode-449000     | jenkins | v1.32.0 | 12 Dec 23 15:12 PST |                     |
	|         | -- nslookup kubernetes.default       |                      |         |         |                     |                     |
	| kubectl | -p multinode-449000                  | multinode-449000     | jenkins | v1.32.0 | 12 Dec 23 15:12 PST |                     |
	|         | -- exec  -- nslookup                 |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |                      |         |         |                     |                     |
	| kubectl | -p multinode-449000 -- get pods -o   | multinode-449000     | jenkins | v1.32.0 | 12 Dec 23 15:12 PST |                     |
	|         | jsonpath='{.items[*].metadata.name}' |                      |         |         |                     |                     |
	| node    | add -p multinode-449000 -v 3         | multinode-449000     | jenkins | v1.32.0 | 12 Dec 23 15:12 PST |                     |
	|         | --alsologtostderr                    |                      |         |         |                     |                     |
	| node    | multinode-449000 node stop m03       | multinode-449000     | jenkins | v1.32.0 | 12 Dec 23 15:12 PST |                     |
	| node    | multinode-449000 node start          | multinode-449000     | jenkins | v1.32.0 | 12 Dec 23 15:12 PST |                     |
	|         | m03 --alsologtostderr                |                      |         |         |                     |                     |
	| node    | list -p multinode-449000             | multinode-449000     | jenkins | v1.32.0 | 12 Dec 23 15:12 PST |                     |
	| stop    | -p multinode-449000                  | multinode-449000     | jenkins | v1.32.0 | 12 Dec 23 15:12 PST | 12 Dec 23 15:12 PST |
	| start   | -p multinode-449000                  | multinode-449000     | jenkins | v1.32.0 | 12 Dec 23 15:12 PST | 12 Dec 23 15:13 PST |
	|         | --wait=true -v=8                     |                      |         |         |                     |                     |
	|         | --alsologtostderr                    |                      |         |         |                     |                     |
	| node    | list -p multinode-449000             | multinode-449000     | jenkins | v1.32.0 | 12 Dec 23 15:13 PST |                     |
	| node    | multinode-449000 node delete         | multinode-449000     | jenkins | v1.32.0 | 12 Dec 23 15:13 PST |                     |
	|         | m03                                  |                      |         |         |                     |                     |
	| stop    | multinode-449000 stop                | multinode-449000     | jenkins | v1.32.0 | 12 Dec 23 15:13 PST | 12 Dec 23 15:13 PST |
	| start   | -p multinode-449000                  | multinode-449000     | jenkins | v1.32.0 | 12 Dec 23 15:13 PST | 12 Dec 23 15:14 PST |
	|         | --wait=true -v=8                     |                      |         |         |                     |                     |
	|         | --alsologtostderr                    |                      |         |         |                     |                     |
	|         | --driver=hyperkit                    |                      |         |         |                     |                     |
	| node    | list -p multinode-449000             | multinode-449000     | jenkins | v1.32.0 | 12 Dec 23 15:14 PST |                     |
	| start   | -p multinode-449000-m01              | multinode-449000-m01 | jenkins | v1.32.0 | 12 Dec 23 15:14 PST | 12 Dec 23 15:15 PST |
	|         | --driver=hyperkit                    |                      |         |         |                     |                     |
	| start   | -p multinode-449000-m02              | multinode-449000-m02 | jenkins | v1.32.0 | 12 Dec 23 15:15 PST | 12 Dec 23 15:15 PST |
	|         | --driver=hyperkit                    |                      |         |         |                     |                     |
	| node    | add -p multinode-449000              | multinode-449000     | jenkins | v1.32.0 | 12 Dec 23 15:15 PST |                     |
	| delete  | -p multinode-449000-m02              | multinode-449000-m02 | jenkins | v1.32.0 | 12 Dec 23 15:15 PST | 12 Dec 23 15:15 PST |
	|---------|--------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/12 15:15:11
	Running on machine: MacOS-Agent-2
	Binary: Built with gc go1.21.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 15:15:11.650703    3859 out.go:296] Setting OutFile to fd 1 ...
	I1212 15:15:11.651138    3859 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 15:15:11.651141    3859 out.go:309] Setting ErrFile to fd 2...
	I1212 15:15:11.651144    3859 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 15:15:11.651322    3859 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17777-1259/.minikube/bin
	I1212 15:15:11.652755    3859 out.go:303] Setting JSON to false
	I1212 15:15:11.675500    3859 start.go:128] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":2682,"bootTime":1702420229,"procs":435,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.2","kernelVersion":"23.2.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W1212 15:15:11.675607    3859 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1212 15:15:11.696737    3859 out.go:177] * [multinode-449000-m02] minikube v1.32.0 on Darwin 14.2
	I1212 15:15:11.738620    3859 out.go:177]   - MINIKUBE_LOCATION=17777
	I1212 15:15:11.738668    3859 notify.go:220] Checking for updates...
	I1212 15:15:11.780484    3859 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17777-1259/kubeconfig
	I1212 15:15:11.801574    3859 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1212 15:15:11.822550    3859 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 15:15:11.843474    3859 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17777-1259/.minikube
	I1212 15:15:11.864608    3859 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 15:15:11.886286    3859 config.go:182] Loaded profile config "multinode-449000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1212 15:15:11.886436    3859 config.go:182] Loaded profile config "multinode-449000-m01": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1212 15:15:11.886585    3859 driver.go:392] Setting default libvirt URI to qemu:///system
	I1212 15:15:11.916499    3859 out.go:177] * Using the hyperkit driver based on user configuration
	I1212 15:15:11.937586    3859 start.go:298] selected driver: hyperkit
	I1212 15:15:11.937601    3859 start.go:902] validating driver "hyperkit" against <nil>
	I1212 15:15:11.937616    3859 start.go:913] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 15:15:11.937818    3859 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 15:15:11.938000    3859 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/17777-1259/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I1212 15:15:11.947270    3859 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.32.0
	I1212 15:15:11.951107    3859 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1212 15:15:11.951127    3859 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I1212 15:15:11.951159    3859 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1212 15:15:11.953891    3859 start_flags.go:394] Using suggested 6000MB memory alloc based on sys=32768MB, container=0MB
	I1212 15:15:11.954026    3859 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
	I1212 15:15:11.954082    3859 cni.go:84] Creating CNI manager for ""
	I1212 15:15:11.954095    3859 cni.go:158] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1212 15:15:11.954104    3859 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1212 15:15:11.954114    3859 start_flags.go:323] config:
	{Name:multinode-449000-m02 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-449000-m02 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Container
Runtime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
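The cni.go lines above record the CNI choice for this profile: the hyperkit driver plus the docker runtime on Kubernetes v1.24+ leads to the bridge CNI and NetworkPlugin=cni in the generated config. A minimal sketch of that kind of lookup is below; the function and return values are illustrative only, not minikube's internal API.

	package main

	import "fmt"

	// chooseCNI mirrors the decision logged above: a VM driver with the docker
	// runtime on Kubernetes v1.24 or newer gets the simple bridge CNI.
	// (Illustrative stand-in, not minikube's actual cni.go logic.)
	func chooseCNI(driver, runtime string, k8sMinor int) string {
		if runtime == "docker" && k8sMinor >= 24 {
			return "bridge"
		}
		return "auto"
	}

	func main() {
		fmt.Println(chooseCNI("hyperkit", "docker", 28)) // prints "bridge"
	}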
	I1212 15:15:11.954255    3859 iso.go:125] acquiring lock: {Name:mk96a55b7848c6dd3321ed62339797ab51ac6b5a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 15:15:11.996460    3859 out.go:177] * Starting control plane node multinode-449000-m02 in cluster multinode-449000-m02
	I1212 15:15:12.017532    3859 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1212 15:15:12.017578    3859 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17777-1259/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I1212 15:15:12.017597    3859 cache.go:56] Caching tarball of preloaded images
	I1212 15:15:12.017772    3859 preload.go:174] Found /Users/jenkins/minikube-integration/17777-1259/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1212 15:15:12.017784    3859 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
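The preload step derives a tarball name from the Kubernetes version, container runtime and architecture, and skips the download when that file is already in the local cache, as it is here. A rough sketch of the existence check, with the path layout copied from the log (the helper name and the MINIKUBE_HOME fallback are made up):

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
	)

	// preloadPath builds the cache path seen in the log:
	//   <minikube home>/cache/preloaded-tarball/preloaded-images-k8s-v18-<k8s>-<runtime>-overlay2-amd64.tar.lz4
	func preloadPath(home, k8sVersion, runtime string) string {
		name := fmt.Sprintf("preloaded-images-k8s-v18-%s-%s-overlay2-amd64.tar.lz4", k8sVersion, runtime)
		return filepath.Join(home, "cache", "preloaded-tarball", name)
	}

	func main() {
		p := preloadPath(os.Getenv("MINIKUBE_HOME"), "v1.28.4", "docker")
		if _, err := os.Stat(p); err == nil {
			fmt.Println("found local preload, skipping download:", p)
		} else {
			fmt.Println("preload missing, would download:", p)
		}
	}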
	I1212 15:15:12.017908    3859 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/multinode-449000-m02/config.json ...
	I1212 15:15:12.017942    3859 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/multinode-449000-m02/config.json: {Name:mk9f6001b68a8113175418a90370323888aa4a62 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 15:15:12.018585    3859 start.go:365] acquiring machines lock for multinode-449000-m02: {Name:mk51496c390b032727acf9b9a5f67e389f19ec26 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1212 15:15:12.018691    3859 start.go:369] acquired machines lock for "multinode-449000-m02" in 85.616µs
	I1212 15:15:12.018729    3859 start.go:93] Provisioning new machine with config: &{Name:multinode-449000-m02 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17738/minikube-v1.32.1-1701996673-17738-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kuber
netesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-449000-m02 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMi
rror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
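Several of the steps above (install.go, iso.go, lock.go, the machines lock in start.go) serialize on named locks with a 500ms retry delay and a per-lock timeout, visible in the Delay/Timeout fields of the log. A minimal sketch of that acquire-with-timeout pattern using a lock file follows; it illustrates the pattern only and is not minikube's lock implementation.

	package main

	import (
		"fmt"
		"os"
		"time"
	)

	// acquire tries to create the lock file exclusively, retrying every delay
	// until timeout elapses, mirroring the Delay/Timeout fields in the log.
	func acquire(path string, delay, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for {
			f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
			if err == nil {
				f.Close()
				return nil // lock held; caller removes path to release it
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("timed out acquiring %s: %w", path, err)
			}
			time.Sleep(delay)
		}
	}

	func main() {
		if err := acquire("/tmp/machines.lock", 500*time.Millisecond, 13*time.Minute); err != nil {
			fmt.Println(err)
			return
		}
		defer os.Remove("/tmp/machines.lock")
		fmt.Println("acquired machines lock")
	}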
	I1212 15:15:12.018797    3859 start.go:125] createHost starting for "" (driver="hyperkit")
	I1212 15:15:12.039362    3859 out.go:204] * Creating hyperkit VM (CPUs=2, Memory=6000MB, Disk=20000MB) ...
	I1212 15:15:12.039666    3859 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1212 15:15:12.039713    3859 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1212 15:15:12.048122    3859 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51475
	I1212 15:15:12.048471    3859 main.go:141] libmachine: () Calling .GetVersion
	I1212 15:15:12.048880    3859 main.go:141] libmachine: Using API Version  1
	I1212 15:15:12.048887    3859 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 15:15:12.049118    3859 main.go:141] libmachine: () Calling .GetMachineName
	I1212 15:15:12.049220    3859 main.go:141] libmachine: (multinode-449000-m02) Calling .GetMachineName
	I1212 15:15:12.049296    3859 main.go:141] libmachine: (multinode-449000-m02) Calling .DriverName
	I1212 15:15:12.049390    3859 start.go:159] libmachine.API.Create for "multinode-449000-m02" (driver="hyperkit")
	I1212 15:15:12.049413    3859 client.go:168] LocalClient.Create starting
	I1212 15:15:12.049447    3859 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17777-1259/.minikube/certs/ca.pem
	I1212 15:15:12.049479    3859 main.go:141] libmachine: Decoding PEM data...
	I1212 15:15:12.049494    3859 main.go:141] libmachine: Parsing certificate...
	I1212 15:15:12.049552    3859 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17777-1259/.minikube/certs/cert.pem
	I1212 15:15:12.049575    3859 main.go:141] libmachine: Decoding PEM data...
	I1212 15:15:12.049584    3859 main.go:141] libmachine: Parsing certificate...
	I1212 15:15:12.049601    3859 main.go:141] libmachine: Running pre-create checks...
	I1212 15:15:12.049606    3859 main.go:141] libmachine: (multinode-449000-m02) Calling .PreCreateCheck
	I1212 15:15:12.049686    3859 main.go:141] libmachine: (multinode-449000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1212 15:15:12.049833    3859 main.go:141] libmachine: (multinode-449000-m02) Calling .GetConfigRaw
	I1212 15:15:12.060760    3859 main.go:141] libmachine: Creating machine...
	I1212 15:15:12.060768    3859 main.go:141] libmachine: (multinode-449000-m02) Calling .Create
	I1212 15:15:12.060883    3859 main.go:141] libmachine: (multinode-449000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1212 15:15:12.061089    3859 main.go:141] libmachine: (multinode-449000-m02) DBG | I1212 15:15:12.060871    3867 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/17777-1259/.minikube
	I1212 15:15:12.061157    3859 main.go:141] libmachine: (multinode-449000-m02) Downloading /Users/jenkins/minikube-integration/17777-1259/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17777-1259/.minikube/cache/iso/amd64/minikube-v1.32.1-1701996673-17738-amd64.iso...
	I1212 15:15:12.284493    3859 main.go:141] libmachine: (multinode-449000-m02) DBG | I1212 15:15:12.284433    3867 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/17777-1259/.minikube/machines/multinode-449000-m02/id_rsa...
	I1212 15:15:12.483603    3859 main.go:141] libmachine: (multinode-449000-m02) DBG | I1212 15:15:12.483548    3867 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/17777-1259/.minikube/machines/multinode-449000-m02/multinode-449000-m02.rawdisk...
	I1212 15:15:12.483616    3859 main.go:141] libmachine: (multinode-449000-m02) DBG | Writing magic tar header
	I1212 15:15:12.483633    3859 main.go:141] libmachine: (multinode-449000-m02) DBG | Writing SSH key tar header
	I1212 15:15:12.484396    3859 main.go:141] libmachine: (multinode-449000-m02) DBG | I1212 15:15:12.484304    3867 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/17777-1259/.minikube/machines/multinode-449000-m02 ...
	I1212 15:15:12.909283    3859 main.go:141] libmachine: (multinode-449000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1212 15:15:12.909302    3859 main.go:141] libmachine: (multinode-449000-m02) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/17777-1259/.minikube/machines/multinode-449000-m02/hyperkit.pid
	I1212 15:15:12.909323    3859 main.go:141] libmachine: (multinode-449000-m02) DBG | Using UUID 4ccd0248-9944-11ee-be79-f01898ef957c
	I1212 15:15:12.935125    3859 main.go:141] libmachine: (multinode-449000-m02) DBG | Generated MAC ea:8:9b:fa:1f:1b
	I1212 15:15:12.935138    3859 main.go:141] libmachine: (multinode-449000-m02) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-449000-m02
	I1212 15:15:12.935179    3859 main.go:141] libmachine: (multinode-449000-m02) DBG | 2023/12/12 15:15:12 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/17777-1259/.minikube/machines/multinode-449000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"4ccd0248-9944-11ee-be79-f01898ef957c", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc000206690)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/17777-1259/.minikube/machines/multinode-449000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/17777-1259/.minikube/machines/multinode-449000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/17777-1259/.minikube/machines/multinode-449000-m02/initrd", Bootrom:"", CPUs:2, Memory:6000, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:""
, process:(*os.Process)(nil)}
	I1212 15:15:12.935222    3859 main.go:141] libmachine: (multinode-449000-m02) DBG | 2023/12/12 15:15:12 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/17777-1259/.minikube/machines/multinode-449000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"4ccd0248-9944-11ee-be79-f01898ef957c", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc000206690)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/17777-1259/.minikube/machines/multinode-449000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/17777-1259/.minikube/machines/multinode-449000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/17777-1259/.minikube/machines/multinode-449000-m02/initrd", Bootrom:"", CPUs:2, Memory:6000, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:""
, process:(*os.Process)(nil)}
	I1212 15:15:12.935265    3859 main.go:141] libmachine: (multinode-449000-m02) DBG | 2023/12/12 15:15:12 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/17777-1259/.minikube/machines/multinode-449000-m02/hyperkit.pid", "-c", "2", "-m", "6000M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "4ccd0248-9944-11ee-be79-f01898ef957c", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/17777-1259/.minikube/machines/multinode-449000-m02/multinode-449000-m02.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/17777-1259/.minikube/machines/multinode-449000-m02/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/17777-1259/.minikube/machines/multinode-449000-m02/tty,log=/Users/jenkins/minikube-integration/17777-1259/.minikube/machines/multinode-449000-m02/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/17777-1259/.minikube/machines/multinode-449000-m02/bzimage,/Users/j
enkins/minikube-integration/17777-1259/.minikube/machines/multinode-449000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-449000-m02"}
	I1212 15:15:12.935300    3859 main.go:141] libmachine: (multinode-449000-m02) DBG | 2023/12/12 15:15:12 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/17777-1259/.minikube/machines/multinode-449000-m02/hyperkit.pid -c 2 -m 6000M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 4ccd0248-9944-11ee-be79-f01898ef957c -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/17777-1259/.minikube/machines/multinode-449000-m02/multinode-449000-m02.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/17777-1259/.minikube/machines/multinode-449000-m02/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/17777-1259/.minikube/machines/multinode-449000-m02/tty,log=/Users/jenkins/minikube-integration/17777-1259/.minikube/machines/multinode-449000-m02/console-ring -f kexec,/Users/jenkins/minikube-integration/17777-1259/.minikube/machines/multinode-449000-m02/bzimage,/Users/jenkins/minikube-integration/17777-1259/.minikube/machines/mult
inode-449000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-449000-m02"
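The DEBUG CmdLine above is the complete hyperkit invocation for this VM: 2 vCPUs, 6000M of memory, a virtio-net NIC, the raw disk attached as virtio-blk, the boot2docker ISO on ahci-cd, a com1 autopty for the serial console, and a kexec boot of the bzimage/initrd with the kernel command line from the log. A hedged sketch of launching an equivalent process with os/exec is below; the /tmp state directory is a placeholder, and in practice hyperkit must run as root with access to Hypervisor.framework.

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		state := "/tmp/multinode-449000-m02" // placeholder for the machine state dir
		args := []string{
			"-A", "-u",
			"-F", state + "/hyperkit.pid",
			"-c", "2",     // vCPUs
			"-m", "6000M", // guest memory
			"-s", "0:0,hostbridge", "-s", "31,lpc",
			"-s", "1:0,virtio-net",
			"-U", "4ccd0248-9944-11ee-be79-f01898ef957c", // VM UUID from the log
			"-s", "2:0,virtio-blk," + state + "/multinode-449000-m02.rawdisk",
			"-s", "3,ahci-cd," + state + "/boot2docker.iso",
			"-s", "4,virtio-rnd",
			"-l", "com1,autopty=" + state + "/tty,log=" + state + "/console-ring",
			"-f", "kexec," + state + "/bzimage," + state + "/initrd," +
				"earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore " +
				"waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on " +
				"hw_rng_model=virtio base host=multinode-449000-m02",
		}
		cmd := exec.Command("/usr/local/bin/hyperkit", args...)
		if err := cmd.Start(); err != nil {
			fmt.Println("failed to start hyperkit:", err)
			return
		}
		fmt.Println("hyperkit pid:", cmd.Process.Pid)
	}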
	I1212 15:15:12.935306    3859 main.go:141] libmachine: (multinode-449000-m02) DBG | 2023/12/12 15:15:12 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I1212 15:15:12.938101    3859 main.go:141] libmachine: (multinode-449000-m02) DBG | 2023/12/12 15:15:12 DEBUG: hyperkit: Pid is 3868
	I1212 15:15:12.938522    3859 main.go:141] libmachine: (multinode-449000-m02) DBG | Attempt 0
	I1212 15:15:12.938530    3859 main.go:141] libmachine: (multinode-449000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1212 15:15:12.938620    3859 main.go:141] libmachine: (multinode-449000-m02) DBG | hyperkit pid from json: 3868
	I1212 15:15:12.939601    3859 main.go:141] libmachine: (multinode-449000-m02) DBG | Searching for ea:8:9b:fa:1f:1b in /var/db/dhcpd_leases ...
	I1212 15:15:12.939682    3859 main.go:141] libmachine: (multinode-449000-m02) DBG | Found 13 entries in /var/db/dhcpd_leases!
	I1212 15:15:12.939696    3859 main.go:141] libmachine: (multinode-449000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:3a:47:ed:bd:6e:e1 ID:1,3a:47:ed:bd:6e:e1 Lease:0x657a3ae3}
	I1212 15:15:12.939715    3859 main.go:141] libmachine: (multinode-449000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:f2:78:2:3f:65:80 ID:1,f2:78:2:3f:65:80 Lease:0x657a3ab0}
	I1212 15:15:12.939732    3859 main.go:141] libmachine: (multinode-449000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ae:69:eb:53:8c:5b ID:1,ae:69:eb:53:8c:5b Lease:0x6578e85b}
	I1212 15:15:12.939738    3859 main.go:141] libmachine: (multinode-449000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6a:5c:e0:d8:73:5b ID:1,6a:5c:e0:d8:73:5b Lease:0x6578e846}
	I1212 15:15:12.939744    3859 main.go:141] libmachine: (multinode-449000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:22:bc:7e:11:6c:f5 ID:1,22:bc:7e:11:6c:f5 Lease:0x657a397e}
	I1212 15:15:12.939749    3859 main.go:141] libmachine: (multinode-449000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fe:39:cb:bf:ae:44 ID:1,fe:39:cb:bf:ae:44 Lease:0x657a3959}
	I1212 15:15:12.939756    3859 main.go:141] libmachine: (multinode-449000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:9a:c9:f7:34:af:5d ID:1,9a:c9:f7:34:af:5d Lease:0x657a391e}
	I1212 15:15:12.939762    3859 main.go:141] libmachine: (multinode-449000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:8d:b0:a8:5f:be ID:1,1e:8d:b0:a8:5f:be Lease:0x657a388a}
	I1212 15:15:12.939767    3859 main.go:141] libmachine: (multinode-449000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e2:94:44:d7:9:11 ID:1,e2:94:44:d7:9:11 Lease:0x6578e6f7}
	I1212 15:15:12.939777    3859 main.go:141] libmachine: (multinode-449000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:a8:f:4a:47:90 ID:1,9a:a8:f:4a:47:90 Lease:0x657a3755}
	I1212 15:15:12.939783    3859 main.go:141] libmachine: (multinode-449000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:c6:cb:3c:63:0:f6 ID:1,c6:cb:3c:63:0:f6 Lease:0x657a3728}
	I1212 15:15:12.939791    3859 main.go:141] libmachine: (multinode-449000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ea:48:d3:f6:3:6b ID:1,ea:48:d3:f6:3:6b Lease:0x657a3638}
	I1212 15:15:12.939800    3859 main.go:141] libmachine: (multinode-449000-m02) DBG | dhcp entry: {Name: IPAddress:192.169.0.2 HWAddress:82:2b:f6:5b:7f:bf ID:1,82:2b:f6:5b:7f:bf Lease:0x657a35f5}
	I1212 15:15:12.945739    3859 main.go:141] libmachine: (multinode-449000-m02) DBG | 2023/12/12 15:15:12 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I1212 15:15:12.955989    3859 main.go:141] libmachine: (multinode-449000-m02) DBG | 2023/12/12 15:15:12 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/17777-1259/.minikube/machines/multinode-449000-m02/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I1212 15:15:12.956796    3859 main.go:141] libmachine: (multinode-449000-m02) DBG | 2023/12/12 15:15:12 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1212 15:15:12.956809    3859 main.go:141] libmachine: (multinode-449000-m02) DBG | 2023/12/12 15:15:12 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1212 15:15:12.956816    3859 main.go:141] libmachine: (multinode-449000-m02) DBG | 2023/12/12 15:15:12 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1212 15:15:12.956821    3859 main.go:141] libmachine: (multinode-449000-m02) DBG | 2023/12/12 15:15:12 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1212 15:15:13.526559    3859 main.go:141] libmachine: (multinode-449000-m02) DBG | 2023/12/12 15:15:13 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I1212 15:15:13.526568    3859 main.go:141] libmachine: (multinode-449000-m02) DBG | 2023/12/12 15:15:13 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I1212 15:15:13.631539    3859 main.go:141] libmachine: (multinode-449000-m02) DBG | 2023/12/12 15:15:13 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1212 15:15:13.631550    3859 main.go:141] libmachine: (multinode-449000-m02) DBG | 2023/12/12 15:15:13 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1212 15:15:13.631558    3859 main.go:141] libmachine: (multinode-449000-m02) DBG | 2023/12/12 15:15:13 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1212 15:15:13.631568    3859 main.go:141] libmachine: (multinode-449000-m02) DBG | 2023/12/12 15:15:13 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1212 15:15:13.632447    3859 main.go:141] libmachine: (multinode-449000-m02) DBG | 2023/12/12 15:15:13 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I1212 15:15:13.632456    3859 main.go:141] libmachine: (multinode-449000-m02) DBG | 2023/12/12 15:15:13 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I1212 15:15:14.941457    3859 main.go:141] libmachine: (multinode-449000-m02) DBG | Attempt 1
	I1212 15:15:14.941471    3859 main.go:141] libmachine: (multinode-449000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1212 15:15:14.941570    3859 main.go:141] libmachine: (multinode-449000-m02) DBG | hyperkit pid from json: 3868
	I1212 15:15:14.942365    3859 main.go:141] libmachine: (multinode-449000-m02) DBG | Searching for ea:8:9b:fa:1f:1b in /var/db/dhcpd_leases ...
	I1212 15:15:14.942414    3859 main.go:141] libmachine: (multinode-449000-m02) DBG | Found 13 entries in /var/db/dhcpd_leases!
	I1212 15:15:14.942436    3859 main.go:141] libmachine: (multinode-449000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:3a:47:ed:bd:6e:e1 ID:1,3a:47:ed:bd:6e:e1 Lease:0x657a3ae3}
	I1212 15:15:14.942450    3859 main.go:141] libmachine: (multinode-449000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:f2:78:2:3f:65:80 ID:1,f2:78:2:3f:65:80 Lease:0x657a3ab0}
	I1212 15:15:14.942472    3859 main.go:141] libmachine: (multinode-449000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ae:69:eb:53:8c:5b ID:1,ae:69:eb:53:8c:5b Lease:0x6578e85b}
	I1212 15:15:14.942485    3859 main.go:141] libmachine: (multinode-449000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6a:5c:e0:d8:73:5b ID:1,6a:5c:e0:d8:73:5b Lease:0x6578e846}
	I1212 15:15:14.942491    3859 main.go:141] libmachine: (multinode-449000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:22:bc:7e:11:6c:f5 ID:1,22:bc:7e:11:6c:f5 Lease:0x657a397e}
	I1212 15:15:14.942498    3859 main.go:141] libmachine: (multinode-449000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fe:39:cb:bf:ae:44 ID:1,fe:39:cb:bf:ae:44 Lease:0x657a3959}
	I1212 15:15:14.942504    3859 main.go:141] libmachine: (multinode-449000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:9a:c9:f7:34:af:5d ID:1,9a:c9:f7:34:af:5d Lease:0x657a391e}
	I1212 15:15:14.942515    3859 main.go:141] libmachine: (multinode-449000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:8d:b0:a8:5f:be ID:1,1e:8d:b0:a8:5f:be Lease:0x657a388a}
	I1212 15:15:14.942521    3859 main.go:141] libmachine: (multinode-449000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e2:94:44:d7:9:11 ID:1,e2:94:44:d7:9:11 Lease:0x6578e6f7}
	I1212 15:15:14.942526    3859 main.go:141] libmachine: (multinode-449000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:a8:f:4a:47:90 ID:1,9a:a8:f:4a:47:90 Lease:0x657a3755}
	I1212 15:15:14.942536    3859 main.go:141] libmachine: (multinode-449000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:c6:cb:3c:63:0:f6 ID:1,c6:cb:3c:63:0:f6 Lease:0x657a3728}
	I1212 15:15:14.942543    3859 main.go:141] libmachine: (multinode-449000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ea:48:d3:f6:3:6b ID:1,ea:48:d3:f6:3:6b Lease:0x657a3638}
	I1212 15:15:14.942551    3859 main.go:141] libmachine: (multinode-449000-m02) DBG | dhcp entry: {Name: IPAddress:192.169.0.2 HWAddress:82:2b:f6:5b:7f:bf ID:1,82:2b:f6:5b:7f:bf Lease:0x657a35f5}
	I1212 15:15:16.943102    3859 main.go:141] libmachine: (multinode-449000-m02) DBG | Attempt 2
	I1212 15:15:16.943112    3859 main.go:141] libmachine: (multinode-449000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1212 15:15:16.943185    3859 main.go:141] libmachine: (multinode-449000-m02) DBG | hyperkit pid from json: 3868
	I1212 15:15:16.944005    3859 main.go:141] libmachine: (multinode-449000-m02) DBG | Searching for ea:8:9b:fa:1f:1b in /var/db/dhcpd_leases ...
	I1212 15:15:16.944051    3859 main.go:141] libmachine: (multinode-449000-m02) DBG | Found 13 entries in /var/db/dhcpd_leases!
	I1212 15:15:16.944059    3859 main.go:141] libmachine: (multinode-449000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:3a:47:ed:bd:6e:e1 ID:1,3a:47:ed:bd:6e:e1 Lease:0x657a3ae3}
	I1212 15:15:16.944068    3859 main.go:141] libmachine: (multinode-449000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:f2:78:2:3f:65:80 ID:1,f2:78:2:3f:65:80 Lease:0x657a3ab0}
	I1212 15:15:16.944073    3859 main.go:141] libmachine: (multinode-449000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ae:69:eb:53:8c:5b ID:1,ae:69:eb:53:8c:5b Lease:0x6578e85b}
	I1212 15:15:16.944079    3859 main.go:141] libmachine: (multinode-449000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6a:5c:e0:d8:73:5b ID:1,6a:5c:e0:d8:73:5b Lease:0x6578e846}
	I1212 15:15:16.944096    3859 main.go:141] libmachine: (multinode-449000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:22:bc:7e:11:6c:f5 ID:1,22:bc:7e:11:6c:f5 Lease:0x657a397e}
	I1212 15:15:16.944109    3859 main.go:141] libmachine: (multinode-449000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fe:39:cb:bf:ae:44 ID:1,fe:39:cb:bf:ae:44 Lease:0x657a3959}
	I1212 15:15:16.944117    3859 main.go:141] libmachine: (multinode-449000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:9a:c9:f7:34:af:5d ID:1,9a:c9:f7:34:af:5d Lease:0x657a391e}
	I1212 15:15:16.944123    3859 main.go:141] libmachine: (multinode-449000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:8d:b0:a8:5f:be ID:1,1e:8d:b0:a8:5f:be Lease:0x657a388a}
	I1212 15:15:16.944134    3859 main.go:141] libmachine: (multinode-449000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e2:94:44:d7:9:11 ID:1,e2:94:44:d7:9:11 Lease:0x6578e6f7}
	I1212 15:15:16.944142    3859 main.go:141] libmachine: (multinode-449000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:a8:f:4a:47:90 ID:1,9a:a8:f:4a:47:90 Lease:0x657a3755}
	I1212 15:15:16.944149    3859 main.go:141] libmachine: (multinode-449000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:c6:cb:3c:63:0:f6 ID:1,c6:cb:3c:63:0:f6 Lease:0x657a3728}
	I1212 15:15:16.944155    3859 main.go:141] libmachine: (multinode-449000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ea:48:d3:f6:3:6b ID:1,ea:48:d3:f6:3:6b Lease:0x657a3638}
	I1212 15:15:16.944162    3859 main.go:141] libmachine: (multinode-449000-m02) DBG | dhcp entry: {Name: IPAddress:192.169.0.2 HWAddress:82:2b:f6:5b:7f:bf ID:1,82:2b:f6:5b:7f:bf Lease:0x657a35f5}
	I1212 15:15:18.659179    3859 main.go:141] libmachine: (multinode-449000-m02) DBG | 2023/12/12 15:15:18 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 0
	I1212 15:15:18.659283    3859 main.go:141] libmachine: (multinode-449000-m02) DBG | 2023/12/12 15:15:18 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 0
	I1212 15:15:18.659293    3859 main.go:141] libmachine: (multinode-449000-m02) DBG | 2023/12/12 15:15:18 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 0
	I1212 15:15:18.944507    3859 main.go:141] libmachine: (multinode-449000-m02) DBG | Attempt 3
	I1212 15:15:18.944522    3859 main.go:141] libmachine: (multinode-449000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1212 15:15:18.944605    3859 main.go:141] libmachine: (multinode-449000-m02) DBG | hyperkit pid from json: 3868
	I1212 15:15:18.945679    3859 main.go:141] libmachine: (multinode-449000-m02) DBG | Searching for ea:8:9b:fa:1f:1b in /var/db/dhcpd_leases ...
	I1212 15:15:18.945698    3859 main.go:141] libmachine: (multinode-449000-m02) DBG | Found 13 entries in /var/db/dhcpd_leases!
	I1212 15:15:18.945721    3859 main.go:141] libmachine: (multinode-449000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:3a:47:ed:bd:6e:e1 ID:1,3a:47:ed:bd:6e:e1 Lease:0x657a3ae3}
	I1212 15:15:18.945733    3859 main.go:141] libmachine: (multinode-449000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:f2:78:2:3f:65:80 ID:1,f2:78:2:3f:65:80 Lease:0x657a3ab0}
	I1212 15:15:18.945741    3859 main.go:141] libmachine: (multinode-449000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ae:69:eb:53:8c:5b ID:1,ae:69:eb:53:8c:5b Lease:0x6578e85b}
	I1212 15:15:18.945749    3859 main.go:141] libmachine: (multinode-449000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6a:5c:e0:d8:73:5b ID:1,6a:5c:e0:d8:73:5b Lease:0x6578e846}
	I1212 15:15:18.945756    3859 main.go:141] libmachine: (multinode-449000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:22:bc:7e:11:6c:f5 ID:1,22:bc:7e:11:6c:f5 Lease:0x657a397e}
	I1212 15:15:18.945764    3859 main.go:141] libmachine: (multinode-449000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fe:39:cb:bf:ae:44 ID:1,fe:39:cb:bf:ae:44 Lease:0x657a3959}
	I1212 15:15:18.945772    3859 main.go:141] libmachine: (multinode-449000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:9a:c9:f7:34:af:5d ID:1,9a:c9:f7:34:af:5d Lease:0x657a391e}
	I1212 15:15:18.945779    3859 main.go:141] libmachine: (multinode-449000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:8d:b0:a8:5f:be ID:1,1e:8d:b0:a8:5f:be Lease:0x657a388a}
	I1212 15:15:18.945789    3859 main.go:141] libmachine: (multinode-449000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e2:94:44:d7:9:11 ID:1,e2:94:44:d7:9:11 Lease:0x6578e6f7}
	I1212 15:15:18.945799    3859 main.go:141] libmachine: (multinode-449000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:a8:f:4a:47:90 ID:1,9a:a8:f:4a:47:90 Lease:0x657a3755}
	I1212 15:15:18.945813    3859 main.go:141] libmachine: (multinode-449000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:c6:cb:3c:63:0:f6 ID:1,c6:cb:3c:63:0:f6 Lease:0x657a3728}
	I1212 15:15:18.945823    3859 main.go:141] libmachine: (multinode-449000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ea:48:d3:f6:3:6b ID:1,ea:48:d3:f6:3:6b Lease:0x657a3638}
	I1212 15:15:18.945833    3859 main.go:141] libmachine: (multinode-449000-m02) DBG | dhcp entry: {Name: IPAddress:192.169.0.2 HWAddress:82:2b:f6:5b:7f:bf ID:1,82:2b:f6:5b:7f:bf Lease:0x657a35f5}
	I1212 15:15:20.946519    3859 main.go:141] libmachine: (multinode-449000-m02) DBG | Attempt 4
	I1212 15:15:20.946531    3859 main.go:141] libmachine: (multinode-449000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1212 15:15:20.946605    3859 main.go:141] libmachine: (multinode-449000-m02) DBG | hyperkit pid from json: 3868
	I1212 15:15:20.947424    3859 main.go:141] libmachine: (multinode-449000-m02) DBG | Searching for ea:8:9b:fa:1f:1b in /var/db/dhcpd_leases ...
	I1212 15:15:20.947470    3859 main.go:141] libmachine: (multinode-449000-m02) DBG | Found 13 entries in /var/db/dhcpd_leases!
	I1212 15:15:20.947486    3859 main.go:141] libmachine: (multinode-449000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:3a:47:ed:bd:6e:e1 ID:1,3a:47:ed:bd:6e:e1 Lease:0x657a3ae3}
	I1212 15:15:20.947497    3859 main.go:141] libmachine: (multinode-449000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:f2:78:2:3f:65:80 ID:1,f2:78:2:3f:65:80 Lease:0x657a3ab0}
	I1212 15:15:20.947504    3859 main.go:141] libmachine: (multinode-449000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ae:69:eb:53:8c:5b ID:1,ae:69:eb:53:8c:5b Lease:0x6578e85b}
	I1212 15:15:20.947514    3859 main.go:141] libmachine: (multinode-449000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6a:5c:e0:d8:73:5b ID:1,6a:5c:e0:d8:73:5b Lease:0x6578e846}
	I1212 15:15:20.947521    3859 main.go:141] libmachine: (multinode-449000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:22:bc:7e:11:6c:f5 ID:1,22:bc:7e:11:6c:f5 Lease:0x657a397e}
	I1212 15:15:20.947527    3859 main.go:141] libmachine: (multinode-449000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fe:39:cb:bf:ae:44 ID:1,fe:39:cb:bf:ae:44 Lease:0x657a3959}
	I1212 15:15:20.947532    3859 main.go:141] libmachine: (multinode-449000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:9a:c9:f7:34:af:5d ID:1,9a:c9:f7:34:af:5d Lease:0x657a391e}
	I1212 15:15:20.947540    3859 main.go:141] libmachine: (multinode-449000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:8d:b0:a8:5f:be ID:1,1e:8d:b0:a8:5f:be Lease:0x657a388a}
	I1212 15:15:20.947547    3859 main.go:141] libmachine: (multinode-449000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e2:94:44:d7:9:11 ID:1,e2:94:44:d7:9:11 Lease:0x6578e6f7}
	I1212 15:15:20.947552    3859 main.go:141] libmachine: (multinode-449000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:a8:f:4a:47:90 ID:1,9a:a8:f:4a:47:90 Lease:0x657a3755}
	I1212 15:15:20.947557    3859 main.go:141] libmachine: (multinode-449000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:c6:cb:3c:63:0:f6 ID:1,c6:cb:3c:63:0:f6 Lease:0x657a3728}
	I1212 15:15:20.947566    3859 main.go:141] libmachine: (multinode-449000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ea:48:d3:f6:3:6b ID:1,ea:48:d3:f6:3:6b Lease:0x657a3638}
	I1212 15:15:20.947572    3859 main.go:141] libmachine: (multinode-449000-m02) DBG | dhcp entry: {Name: IPAddress:192.169.0.2 HWAddress:82:2b:f6:5b:7f:bf ID:1,82:2b:f6:5b:7f:bf Lease:0x657a35f5}
	I1212 15:15:22.947838    3859 main.go:141] libmachine: (multinode-449000-m02) DBG | Attempt 5
	I1212 15:15:22.947848    3859 main.go:141] libmachine: (multinode-449000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1212 15:15:22.947924    3859 main.go:141] libmachine: (multinode-449000-m02) DBG | hyperkit pid from json: 3868
	I1212 15:15:22.948747    3859 main.go:141] libmachine: (multinode-449000-m02) DBG | Searching for ea:8:9b:fa:1f:1b in /var/db/dhcpd_leases ...
	I1212 15:15:22.948821    3859 main.go:141] libmachine: (multinode-449000-m02) DBG | Found 14 entries in /var/db/dhcpd_leases!
	I1212 15:15:22.948832    3859 main.go:141] libmachine: (multinode-449000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ea:8:9b:fa:1f:1b ID:1,ea:8:9b:fa:1f:1b Lease:0x657a3b0a}
	I1212 15:15:22.948839    3859 main.go:141] libmachine: (multinode-449000-m02) DBG | Found match: ea:8:9b:fa:1f:1b
	I1212 15:15:22.948843    3859 main.go:141] libmachine: (multinode-449000-m02) DBG | IP: 192.169.0.15
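The driver discovers the new VM's IP by polling /var/db/dhcpd_leases every two seconds until an entry with the generated MAC (ea:8:9b:fa:1f:1b) appears, which happens on attempt 5 above. A rough sketch of that lookup follows, assuming the usual macOS bootpd lease format of ip_address= and hw_address= lines inside braces:

	package main

	import (
		"bufio"
		"fmt"
		"os"
		"strings"
	)

	// ipForMAC scans the bootpd lease file for an entry whose hw_address ends
	// with the given MAC and returns its ip_address, or "" if no lease exists yet.
	func ipForMAC(leaseFile, mac string) (string, error) {
		f, err := os.Open(leaseFile)
		if err != nil {
			return "", err
		}
		defer f.Close()

		var ip, hw string
		sc := bufio.NewScanner(f)
		for sc.Scan() {
			line := strings.TrimSpace(sc.Text())
			switch {
			case line == "{":
				ip, hw = "", "" // start of a new lease entry
			case strings.HasPrefix(line, "ip_address="):
				ip = strings.TrimPrefix(line, "ip_address=")
			case strings.HasPrefix(line, "hw_address="):
				hw = strings.TrimPrefix(line, "hw_address=")
			case line == "}":
				// hw_address carries a leading hardware type, e.g. "1,ea:8:9b:fa:1f:1b"
				if strings.HasSuffix(hw, ","+mac) {
					return ip, nil
				}
			}
		}
		return "", sc.Err()
	}

	func main() {
		ip, err := ipForMAC("/var/db/dhcpd_leases", "ea:8:9b:fa:1f:1b")
		if err != nil || ip == "" {
			fmt.Println("no lease yet; retry after a delay")
			return
		}
		fmt.Println("found match, IP:", ip)
	}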
	I1212 15:15:22.949060    3859 main.go:141] libmachine: (multinode-449000-m02) Calling .GetConfigRaw
	I1212 15:15:22.949632    3859 main.go:141] libmachine: (multinode-449000-m02) Calling .DriverName
	I1212 15:15:22.949734    3859 main.go:141] libmachine: (multinode-449000-m02) Calling .DriverName
	I1212 15:15:22.949829    3859 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1212 15:15:22.949835    3859 main.go:141] libmachine: (multinode-449000-m02) Calling .GetState
	I1212 15:15:22.949941    3859 main.go:141] libmachine: (multinode-449000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1212 15:15:22.949994    3859 main.go:141] libmachine: (multinode-449000-m02) DBG | hyperkit pid from json: 3868
	I1212 15:15:22.950782    3859 main.go:141] libmachine: Detecting operating system of created instance...
	I1212 15:15:22.950792    3859 main.go:141] libmachine: Waiting for SSH to be available...
	I1212 15:15:22.950796    3859 main.go:141] libmachine: Getting to WaitForSSH function...
	I1212 15:15:22.950802    3859 main.go:141] libmachine: (multinode-449000-m02) Calling .GetSSHHostname
	I1212 15:15:22.950893    3859 main.go:141] libmachine: (multinode-449000-m02) Calling .GetSSHPort
	I1212 15:15:22.950982    3859 main.go:141] libmachine: (multinode-449000-m02) Calling .GetSSHKeyPath
	I1212 15:15:22.951068    3859 main.go:141] libmachine: (multinode-449000-m02) Calling .GetSSHKeyPath
	I1212 15:15:22.951158    3859 main.go:141] libmachine: (multinode-449000-m02) Calling .GetSSHUsername
	I1212 15:15:22.951270    3859 main.go:141] libmachine: Using SSH client type: native
	I1212 15:15:22.951582    3859 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1406180] 0x1408e60 <nil>  [] 0s} 192.169.0.15 22 <nil> <nil>}
	I1212 15:15:22.951586    3859 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1212 15:15:22.978337    3859 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1212 15:15:26.054027    3859 main.go:141] libmachine: SSH cmd err, output: <nil>: 
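WaitForSSH keeps probing the guest with a trivial "exit 0" command until sshd inside the VM accepts the generated key; the single "unable to authenticate" error above is simply an attempt made before boot finished. A hedged sketch of such a probe with golang.org/x/crypto/ssh is below; the address and user come from the log, while the key path and retry policy are placeholders.

	package main

	import (
		"fmt"
		"os"
		"time"

		"golang.org/x/crypto/ssh"
	)

	// waitForSSH dials the guest and runs `exit 0` until it succeeds or the
	// deadline passes, mirroring the probe logged above.
	func waitForSSH(addr, user, keyPath string, timeout time.Duration) error {
		key, err := os.ReadFile(keyPath)
		if err != nil {
			return err
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			return err
		}
		cfg := &ssh.ClientConfig{
			User:            user,
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // fresh test VM, no known_hosts yet
			Timeout:         5 * time.Second,
		}
		deadline := time.Now().Add(timeout)
		for {
			client, err := ssh.Dial("tcp", addr, cfg)
			if err == nil {
				sess, serr := client.NewSession()
				if serr == nil {
					serr = sess.Run("exit 0")
					sess.Close()
				}
				client.Close()
				if serr == nil {
					return nil // SSH is available
				}
				err = serr
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("ssh never became available: %w", err)
			}
			time.Sleep(2 * time.Second)
		}
	}

	func main() {
		err := waitForSSH("192.169.0.15:22", "docker",
			os.Getenv("HOME")+"/.minikube/machines/multinode-449000-m02/id_rsa", 3*time.Minute)
		fmt.Println(err)
	}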
	I1212 15:15:26.054036    3859 main.go:141] libmachine: Detecting the provisioner...
	I1212 15:15:26.054040    3859 main.go:141] libmachine: (multinode-449000-m02) Calling .GetSSHHostname
	I1212 15:15:26.054163    3859 main.go:141] libmachine: (multinode-449000-m02) Calling .GetSSHPort
	I1212 15:15:26.054241    3859 main.go:141] libmachine: (multinode-449000-m02) Calling .GetSSHKeyPath
	I1212 15:15:26.054323    3859 main.go:141] libmachine: (multinode-449000-m02) Calling .GetSSHKeyPath
	I1212 15:15:26.054412    3859 main.go:141] libmachine: (multinode-449000-m02) Calling .GetSSHUsername
	I1212 15:15:26.054532    3859 main.go:141] libmachine: Using SSH client type: native
	I1212 15:15:26.054797    3859 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1406180] 0x1408e60 <nil>  [] 0s} 192.169.0.15 22 <nil> <nil>}
	I1212 15:15:26.054802    3859 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1212 15:15:26.128858    3859 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-g0ec83c8-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I1212 15:15:26.128913    3859 main.go:141] libmachine: found compatible host: buildroot
	I1212 15:15:26.128917    3859 main.go:141] libmachine: Provisioning with buildroot...
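Provisioner detection just reads /etc/os-release over SSH and matches the ID field; here ID=buildroot selects the buildroot provisioner. A small local sketch of that match (reading the file directly instead of through the SSH runner):

	package main

	import (
		"bufio"
		"fmt"
		"os"
		"strings"
	)

	// osReleaseID returns the ID= value from an os-release file, e.g. "buildroot".
	func osReleaseID(path string) (string, error) {
		f, err := os.Open(path)
		if err != nil {
			return "", err
		}
		defer f.Close()
		sc := bufio.NewScanner(f)
		for sc.Scan() {
			line := strings.TrimSpace(sc.Text())
			if strings.HasPrefix(line, "ID=") {
				return strings.Trim(strings.TrimPrefix(line, "ID="), `"`), nil
			}
		}
		return "", sc.Err()
	}

	func main() {
		id, err := osReleaseID("/etc/os-release")
		if err != nil {
			fmt.Println(err)
			return
		}
		if id == "buildroot" {
			fmt.Println("found compatible host: buildroot")
		}
	}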
	I1212 15:15:26.128922    3859 main.go:141] libmachine: (multinode-449000-m02) Calling .GetMachineName
	I1212 15:15:26.129060    3859 buildroot.go:166] provisioning hostname "multinode-449000-m02"
	I1212 15:15:26.129068    3859 main.go:141] libmachine: (multinode-449000-m02) Calling .GetMachineName
	I1212 15:15:26.129154    3859 main.go:141] libmachine: (multinode-449000-m02) Calling .GetSSHHostname
	I1212 15:15:26.129236    3859 main.go:141] libmachine: (multinode-449000-m02) Calling .GetSSHPort
	I1212 15:15:26.129323    3859 main.go:141] libmachine: (multinode-449000-m02) Calling .GetSSHKeyPath
	I1212 15:15:26.129387    3859 main.go:141] libmachine: (multinode-449000-m02) Calling .GetSSHKeyPath
	I1212 15:15:26.129483    3859 main.go:141] libmachine: (multinode-449000-m02) Calling .GetSSHUsername
	I1212 15:15:26.129621    3859 main.go:141] libmachine: Using SSH client type: native
	I1212 15:15:26.129864    3859 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1406180] 0x1408e60 <nil>  [] 0s} 192.169.0.15 22 <nil> <nil>}
	I1212 15:15:26.129870    3859 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-449000-m02 && echo "multinode-449000-m02" | sudo tee /etc/hostname
	I1212 15:15:26.212603    3859 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-449000-m02
	
	I1212 15:15:26.212618    3859 main.go:141] libmachine: (multinode-449000-m02) Calling .GetSSHHostname
	I1212 15:15:26.212744    3859 main.go:141] libmachine: (multinode-449000-m02) Calling .GetSSHPort
	I1212 15:15:26.212832    3859 main.go:141] libmachine: (multinode-449000-m02) Calling .GetSSHKeyPath
	I1212 15:15:26.212919    3859 main.go:141] libmachine: (multinode-449000-m02) Calling .GetSSHKeyPath
	I1212 15:15:26.212997    3859 main.go:141] libmachine: (multinode-449000-m02) Calling .GetSSHUsername
	I1212 15:15:26.213122    3859 main.go:141] libmachine: Using SSH client type: native
	I1212 15:15:26.213372    3859 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1406180] 0x1408e60 <nil>  [] 0s} 192.169.0.15 22 <nil> <nil>}
	I1212 15:15:26.213382    3859 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-449000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-449000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-449000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 15:15:26.291637    3859 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1212 15:15:26.291651    3859 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/17777-1259/.minikube CaCertPath:/Users/jenkins/minikube-integration/17777-1259/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/17777-1259/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/17777-1259/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/17777-1259/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/17777-1259/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/17777-1259/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/17777-1259/.minikube}
	I1212 15:15:26.291663    3859 buildroot.go:174] setting up certificates
	I1212 15:15:26.291674    3859 provision.go:83] configureAuth start
	I1212 15:15:26.291679    3859 main.go:141] libmachine: (multinode-449000-m02) Calling .GetMachineName
	I1212 15:15:26.291816    3859 main.go:141] libmachine: (multinode-449000-m02) Calling .GetIP
	I1212 15:15:26.291921    3859 main.go:141] libmachine: (multinode-449000-m02) Calling .GetSSHHostname
	I1212 15:15:26.292003    3859 provision.go:138] copyHostCerts
	I1212 15:15:26.292074    3859 exec_runner.go:144] found /Users/jenkins/minikube-integration/17777-1259/.minikube/key.pem, removing ...
	I1212 15:15:26.292081    3859 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17777-1259/.minikube/key.pem
	I1212 15:15:26.292207    3859 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17777-1259/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/17777-1259/.minikube/key.pem (1675 bytes)
	I1212 15:15:26.292437    3859 exec_runner.go:144] found /Users/jenkins/minikube-integration/17777-1259/.minikube/ca.pem, removing ...
	I1212 15:15:26.292440    3859 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17777-1259/.minikube/ca.pem
	I1212 15:15:26.292517    3859 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17777-1259/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/17777-1259/.minikube/ca.pem (1082 bytes)
	I1212 15:15:26.292682    3859 exec_runner.go:144] found /Users/jenkins/minikube-integration/17777-1259/.minikube/cert.pem, removing ...
	I1212 15:15:26.292685    3859 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17777-1259/.minikube/cert.pem
	I1212 15:15:26.292757    3859 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17777-1259/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/17777-1259/.minikube/cert.pem (1123 bytes)
	I1212 15:15:26.292901    3859 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/17777-1259/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/17777-1259/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/17777-1259/.minikube/certs/ca-key.pem org=jenkins.multinode-449000-m02 san=[192.169.0.15 192.169.0.15 localhost 127.0.0.1 minikube multinode-449000-m02]
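configureAuth generates a fresh server certificate for the new machine, signed by the minikube CA and carrying the VM's IP plus the localhost/minikube hostnames as SANs (the san=[...] list above). A condensed crypto/x509 sketch is below; the CA is generated in-process purely to keep the example self-contained, whereas the real flow loads ca.pem/ca-key.pem from the certs directory.

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"fmt"
		"math/big"
		"net"
		"time"
	)

	func main() {
		// Throwaway CA (the real code reuses .minikube/certs/ca.pem + ca-key.pem).
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().AddDate(10, 0, 0),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		caCert, _ := x509.ParseCertificate(caDER)

		// Server cert with the SANs listed in the log for multinode-449000-m02.
		srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		srvTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{Organization: []string{"jenkins.multinode-449000-m02"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
			DNSNames:     []string{"localhost", "minikube", "multinode-449000-m02"},
			IPAddresses:  []net.IP{net.ParseIP("192.169.0.15"), net.ParseIP("127.0.0.1")},
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
		if err != nil {
			fmt.Println(err)
			return
		}
		fmt.Printf("server.pem would be %d DER bytes\n", len(srvDER))
	}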
	I1212 15:15:26.551285    3859 provision.go:172] copyRemoteCerts
	I1212 15:15:26.551345    3859 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 15:15:26.551361    3859 main.go:141] libmachine: (multinode-449000-m02) Calling .GetSSHHostname
	I1212 15:15:26.551507    3859 main.go:141] libmachine: (multinode-449000-m02) Calling .GetSSHPort
	I1212 15:15:26.551594    3859 main.go:141] libmachine: (multinode-449000-m02) Calling .GetSSHKeyPath
	I1212 15:15:26.551677    3859 main.go:141] libmachine: (multinode-449000-m02) Calling .GetSSHUsername
	I1212 15:15:26.551756    3859 sshutil.go:53] new ssh client: &{IP:192.169.0.15 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17777-1259/.minikube/machines/multinode-449000-m02/id_rsa Username:docker}
	I1212 15:15:26.594545    3859 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17777-1259/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I1212 15:15:26.610574    3859 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17777-1259/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1212 15:15:26.626358    3859 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17777-1259/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1212 15:15:26.642026    3859 provision.go:86] duration metric: configureAuth took 350.343073ms
	I1212 15:15:26.642035    3859 buildroot.go:189] setting minikube options for container-runtime
	I1212 15:15:26.642155    3859 config.go:182] Loaded profile config "multinode-449000-m02": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1212 15:15:26.642165    3859 main.go:141] libmachine: (multinode-449000-m02) Calling .DriverName
	I1212 15:15:26.642296    3859 main.go:141] libmachine: (multinode-449000-m02) Calling .GetSSHHostname
	I1212 15:15:26.642387    3859 main.go:141] libmachine: (multinode-449000-m02) Calling .GetSSHPort
	I1212 15:15:26.642480    3859 main.go:141] libmachine: (multinode-449000-m02) Calling .GetSSHKeyPath
	I1212 15:15:26.642560    3859 main.go:141] libmachine: (multinode-449000-m02) Calling .GetSSHKeyPath
	I1212 15:15:26.642625    3859 main.go:141] libmachine: (multinode-449000-m02) Calling .GetSSHUsername
	I1212 15:15:26.642716    3859 main.go:141] libmachine: Using SSH client type: native
	I1212 15:15:26.642953    3859 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1406180] 0x1408e60 <nil>  [] 0s} 192.169.0.15 22 <nil> <nil>}
	I1212 15:15:26.642958    3859 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1212 15:15:26.717795    3859 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1212 15:15:26.717803    3859 buildroot.go:70] root file system type: tmpfs
	I1212 15:15:26.717883    3859 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1212 15:15:26.717895    3859 main.go:141] libmachine: (multinode-449000-m02) Calling .GetSSHHostname
	I1212 15:15:26.718022    3859 main.go:141] libmachine: (multinode-449000-m02) Calling .GetSSHPort
	I1212 15:15:26.718099    3859 main.go:141] libmachine: (multinode-449000-m02) Calling .GetSSHKeyPath
	I1212 15:15:26.718184    3859 main.go:141] libmachine: (multinode-449000-m02) Calling .GetSSHKeyPath
	I1212 15:15:26.718276    3859 main.go:141] libmachine: (multinode-449000-m02) Calling .GetSSHUsername
	I1212 15:15:26.718404    3859 main.go:141] libmachine: Using SSH client type: native
	I1212 15:15:26.718648    3859 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1406180] 0x1408e60 <nil>  [] 0s} 192.169.0.15 22 <nil> <nil>}
	I1212 15:15:26.718688    3859 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1212 15:15:26.801193    3859 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1212 15:15:26.801210    3859 main.go:141] libmachine: (multinode-449000-m02) Calling .GetSSHHostname
	I1212 15:15:26.801334    3859 main.go:141] libmachine: (multinode-449000-m02) Calling .GetSSHPort
	I1212 15:15:26.801411    3859 main.go:141] libmachine: (multinode-449000-m02) Calling .GetSSHKeyPath
	I1212 15:15:26.801498    3859 main.go:141] libmachine: (multinode-449000-m02) Calling .GetSSHKeyPath
	I1212 15:15:26.801576    3859 main.go:141] libmachine: (multinode-449000-m02) Calling .GetSSHUsername
	I1212 15:15:26.801715    3859 main.go:141] libmachine: Using SSH client type: native
	I1212 15:15:26.801973    3859 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1406180] 0x1408e60 <nil>  [] 0s} 192.169.0.15 22 <nil> <nil>}
	I1212 15:15:26.801982    3859 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1212 15:15:27.389210    3859 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
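	The diff || { mv; systemctl } command and its output above are the idempotent install step for the generated Docker unit: the new file is staged as docker.service.new and only replaces /lib/systemd/system/docker.service (followed by daemon-reload, enable, and restart) when diff reports a difference or, as here, the old file does not exist yet. A minimal shell sketch of the same idiom, using the paths from the log (the exact systemctl flags minikube passes may differ):
	new=/lib/systemd/system/docker.service.new
	cur=/lib/systemd/system/docker.service
	# Replace the unit only if it changed (or is missing), then reload and restart Docker.
	sudo diff -u "$cur" "$new" || {
	  sudo mv "$new" "$cur"
	  sudo systemctl daemon-reload && sudo systemctl enable docker && sudo systemctl restart docker
	}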
	
	I1212 15:15:27.389221    3859 main.go:141] libmachine: Checking connection to Docker...
	I1212 15:15:27.389226    3859 main.go:141] libmachine: (multinode-449000-m02) Calling .GetURL
	I1212 15:15:27.389409    3859 main.go:141] libmachine: Docker is up and running!
	I1212 15:15:27.389414    3859 main.go:141] libmachine: Reticulating splines...
	I1212 15:15:27.389424    3859 client.go:171] LocalClient.Create took 15.340106739s
	I1212 15:15:27.389435    3859 start.go:167] duration metric: libmachine.API.Create for "multinode-449000-m02" took 15.340151373s
	I1212 15:15:27.389441    3859 start.go:300] post-start starting for "multinode-449000-m02" (driver="hyperkit")
	I1212 15:15:27.389448    3859 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 15:15:27.389455    3859 main.go:141] libmachine: (multinode-449000-m02) Calling .DriverName
	I1212 15:15:27.389603    3859 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 15:15:27.389615    3859 main.go:141] libmachine: (multinode-449000-m02) Calling .GetSSHHostname
	I1212 15:15:27.389709    3859 main.go:141] libmachine: (multinode-449000-m02) Calling .GetSSHPort
	I1212 15:15:27.389795    3859 main.go:141] libmachine: (multinode-449000-m02) Calling .GetSSHKeyPath
	I1212 15:15:27.389887    3859 main.go:141] libmachine: (multinode-449000-m02) Calling .GetSSHUsername
	I1212 15:15:27.389969    3859 sshutil.go:53] new ssh client: &{IP:192.169.0.15 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17777-1259/.minikube/machines/multinode-449000-m02/id_rsa Username:docker}
	I1212 15:15:27.431673    3859 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 15:15:27.434342    3859 info.go:137] Remote host: Buildroot 2021.02.12
	I1212 15:15:27.434353    3859 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17777-1259/.minikube/addons for local assets ...
	I1212 15:15:27.434445    3859 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17777-1259/.minikube/files for local assets ...
	I1212 15:15:27.434610    3859 filesync.go:149] local asset: /Users/jenkins/minikube-integration/17777-1259/.minikube/files/etc/ssl/certs/17202.pem -> 17202.pem in /etc/ssl/certs
	I1212 15:15:27.434815    3859 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 15:15:27.441307    3859 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17777-1259/.minikube/files/etc/ssl/certs/17202.pem --> /etc/ssl/certs/17202.pem (1708 bytes)
	I1212 15:15:27.457712    3859 start.go:303] post-start completed in 68.26507ms
	I1212 15:15:27.457736    3859 main.go:141] libmachine: (multinode-449000-m02) Calling .GetConfigRaw
	I1212 15:15:27.458369    3859 main.go:141] libmachine: (multinode-449000-m02) Calling .GetIP
	I1212 15:15:27.458511    3859 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/multinode-449000-m02/config.json ...
	I1212 15:15:27.458874    3859 start.go:128] duration metric: createHost completed in 15.440172905s
	I1212 15:15:27.458887    3859 main.go:141] libmachine: (multinode-449000-m02) Calling .GetSSHHostname
	I1212 15:15:27.458989    3859 main.go:141] libmachine: (multinode-449000-m02) Calling .GetSSHPort
	I1212 15:15:27.459068    3859 main.go:141] libmachine: (multinode-449000-m02) Calling .GetSSHKeyPath
	I1212 15:15:27.459155    3859 main.go:141] libmachine: (multinode-449000-m02) Calling .GetSSHKeyPath
	I1212 15:15:27.459232    3859 main.go:141] libmachine: (multinode-449000-m02) Calling .GetSSHUsername
	I1212 15:15:27.459334    3859 main.go:141] libmachine: Using SSH client type: native
	I1212 15:15:27.459569    3859 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1406180] 0x1408e60 <nil>  [] 0s} 192.169.0.15 22 <nil> <nil>}
	I1212 15:15:27.459573    3859 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1212 15:15:27.532412    3859 main.go:141] libmachine: SSH cmd err, output: <nil>: 1702422927.613746835
	
	I1212 15:15:27.532419    3859 fix.go:206] guest clock: 1702422927.613746835
	I1212 15:15:27.532422    3859 fix.go:219] Guest: 2023-12-12 15:15:27.613746835 -0800 PST Remote: 2023-12-12 15:15:27.45888 -0800 PST m=+15.852672873 (delta=154.866835ms)
	I1212 15:15:27.532439    3859 fix.go:190] guest clock delta is within tolerance: 154.866835ms
	I1212 15:15:27.532442    3859 start.go:83] releasing machines lock for "multinode-449000-m02", held for 15.513851393s
	I1212 15:15:27.532459    3859 main.go:141] libmachine: (multinode-449000-m02) Calling .DriverName
	I1212 15:15:27.532607    3859 main.go:141] libmachine: (multinode-449000-m02) Calling .GetIP
	I1212 15:15:27.532694    3859 main.go:141] libmachine: (multinode-449000-m02) Calling .DriverName
	I1212 15:15:27.532980    3859 main.go:141] libmachine: (multinode-449000-m02) Calling .DriverName
	I1212 15:15:27.533061    3859 main.go:141] libmachine: (multinode-449000-m02) Calling .DriverName
	I1212 15:15:27.533149    3859 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 15:15:27.533183    3859 main.go:141] libmachine: (multinode-449000-m02) Calling .GetSSHHostname
	I1212 15:15:27.533195    3859 ssh_runner.go:195] Run: cat /version.json
	I1212 15:15:27.533202    3859 main.go:141] libmachine: (multinode-449000-m02) Calling .GetSSHHostname
	I1212 15:15:27.533286    3859 main.go:141] libmachine: (multinode-449000-m02) Calling .GetSSHPort
	I1212 15:15:27.533299    3859 main.go:141] libmachine: (multinode-449000-m02) Calling .GetSSHPort
	I1212 15:15:27.533378    3859 main.go:141] libmachine: (multinode-449000-m02) Calling .GetSSHKeyPath
	I1212 15:15:27.533391    3859 main.go:141] libmachine: (multinode-449000-m02) Calling .GetSSHKeyPath
	I1212 15:15:27.533466    3859 main.go:141] libmachine: (multinode-449000-m02) Calling .GetSSHUsername
	I1212 15:15:27.533484    3859 main.go:141] libmachine: (multinode-449000-m02) Calling .GetSSHUsername
	I1212 15:15:27.533531    3859 sshutil.go:53] new ssh client: &{IP:192.169.0.15 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17777-1259/.minikube/machines/multinode-449000-m02/id_rsa Username:docker}
	I1212 15:15:27.533548    3859 sshutil.go:53] new ssh client: &{IP:192.169.0.15 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17777-1259/.minikube/machines/multinode-449000-m02/id_rsa Username:docker}
	I1212 15:15:27.575529    3859 ssh_runner.go:195] Run: systemctl --version
	I1212 15:15:27.624692    3859 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 15:15:27.628339    3859 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 15:15:27.628387    3859 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 15:15:27.637827    3859 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1212 15:15:27.637839    3859 start.go:475] detecting cgroup driver to use...
	I1212 15:15:27.637958    3859 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 15:15:27.652239    3859 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1212 15:15:27.659299    3859 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1212 15:15:27.665848    3859 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1212 15:15:27.665889    3859 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1212 15:15:27.672469    3859 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 15:15:27.679172    3859 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1212 15:15:27.685627    3859 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 15:15:27.692086    3859 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 15:15:27.698778    3859 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1212 15:15:27.705488    3859 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 15:15:27.711395    3859 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 15:15:27.717163    3859 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 15:15:27.804364    3859 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1212 15:15:27.816729    3859 start.go:475] detecting cgroup driver to use...
	I1212 15:15:27.816807    3859 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1212 15:15:27.828846    3859 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 15:15:27.843619    3859 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 15:15:27.859977    3859 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 15:15:27.868970    3859 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1212 15:15:27.877462    3859 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1212 15:15:27.899708    3859 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1212 15:15:27.909317    3859 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 15:15:27.921429    3859 ssh_runner.go:195] Run: which cri-dockerd
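	The /etc/crictl.yaml rewrite just above repoints the CRI tooling from containerd (the endpoint written at 15:15:27.637958) to cri-dockerd, which is what the later crictl version check at 15:15:30.129149 talks to. A small sketch of what should end up on the node and how to probe it, assuming the socket path shown in the log:
	# Expected contents of /etc/crictl.yaml after this step:
	#   runtime-endpoint: unix:///var/run/cri-dockerd.sock
	# Probe the runtime through that socket (crictl accepts an explicit endpoint flag):
	sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock version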
	I1212 15:15:27.923785    3859 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1212 15:15:27.930186    3859 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1212 15:15:27.941148    3859 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1212 15:15:28.040506    3859 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1212 15:15:28.139937    3859 docker.go:560] configuring docker to use "cgroupfs" as cgroup driver...
	I1212 15:15:28.140005    3859 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1212 15:15:28.151517    3859 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 15:15:28.241148    3859 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1212 15:15:29.588092    3859 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.34693927s)
	I1212 15:15:29.588151    3859 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1212 15:15:29.686268    3859 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1212 15:15:29.770179    3859 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1212 15:15:29.866859    3859 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 15:15:29.956504    3859 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1212 15:15:29.972711    3859 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 15:15:30.066918    3859 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I1212 15:15:30.120779    3859 start.go:522] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1212 15:15:30.120860    3859 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1212 15:15:30.125651    3859 start.go:543] Will wait 60s for crictl version
	I1212 15:15:30.125717    3859 ssh_runner.go:195] Run: which crictl
	I1212 15:15:30.129149    3859 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1212 15:15:30.163424    3859 start.go:559] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.7
	RuntimeApiVersion:  v1
	I1212 15:15:30.163489    3859 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1212 15:15:30.181365    3859 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1212 15:15:30.262890    3859 out.go:204] * Preparing Kubernetes v1.28.4 on Docker 24.0.7 ...
	I1212 15:15:30.262920    3859 main.go:141] libmachine: (multinode-449000-m02) Calling .GetIP
	I1212 15:15:30.263161    3859 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I1212 15:15:30.265877    3859 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
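	The /etc/hosts update above uses a common sudo-safe idiom: grep -v strips any stale host.minikube.internal line, the fresh record is appended, the result goes to a temp file, and sudo cp moves it into place (a plain > redirection would not run with root privileges). To confirm the record afterwards (this is what the grep at 15:15:30.263161 checks for):
	grep 'host.minikube.internal' /etc/hosts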
	I1212 15:15:30.274578    3859 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1212 15:15:30.274648    3859 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1212 15:15:30.287718    3859 docker.go:671] Got preloaded images: 
	I1212 15:15:30.287726    3859 docker.go:677] registry.k8s.io/kube-apiserver:v1.28.4 wasn't preloaded
	I1212 15:15:30.287782    3859 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1212 15:15:30.294415    3859 ssh_runner.go:195] Run: which lz4
	I1212 15:15:30.296913    3859 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1212 15:15:30.299480    3859 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1212 15:15:30.299495    3859 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17777-1259/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (423165415 bytes)
	I1212 15:15:31.861000    3859 docker.go:635] Took 1.564139 seconds to copy over tarball
	I1212 15:15:31.861062    3859 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1212 15:15:35.836006    3859 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.974955739s)
	I1212 15:15:35.836015    3859 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1212 15:15:35.866417    3859 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1212 15:15:35.873216    3859 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2629 bytes)
	I1212 15:15:35.884506    3859 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 15:15:35.980431    3859 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1212 15:15:37.898592    3859 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.918158585s)
	I1212 15:15:37.898684    3859 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1212 15:15:37.912562    3859 docker.go:671] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
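	After the Docker restart, the image list above confirms that the preload tarball extracted at 15:15:31.861062 brought in all of the v1.28.4 control-plane images, so cache_images skips any further loading. A sketch of reproducing the same check by hand on the node (profile name taken from this run):
	# Inside the VM, e.g. via: minikube ssh -p multinode-449000-m02
	docker images --format '{{.Repository}}:{{.Tag}}' | grep registry.k8s.io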
	I1212 15:15:37.912576    3859 cache_images.go:84] Images are preloaded, skipping loading
	I1212 15:15:37.912649    3859 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1212 15:15:37.931228    3859 cni.go:84] Creating CNI manager for ""
	I1212 15:15:37.931238    3859 cni.go:158] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1212 15:15:37.931247    3859 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1212 15:15:37.931262    3859 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.169.0.15 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-449000-m02 NodeName:multinode-449000-m02 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.169.0.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.169.0.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 15:15:37.931348    3859 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.169.0.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-449000-m02"
	  kubeletExtraArgs:
	    node-ip: 192.169.0.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.169.0.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1212 15:15:37.931400    3859 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=multinode-449000-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-449000-m02 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1212 15:15:37.931455    3859 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1212 15:15:37.937373    3859 binaries.go:44] Found k8s binaries, skipping transfer
	I1212 15:15:37.937422    3859 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 15:15:37.943795    3859 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (381 bytes)
	I1212 15:15:37.955375    3859 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1212 15:15:37.966525    3859 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2103 bytes)
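	The three scp steps above stage the kubelet drop-in, the kubelet unit, and the combined kubeadm config (the InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration dumped earlier) as /var/tmp/minikube/kubeadm.yaml.new; the file is copied to kubeadm.yaml at 15:15:38.711462 and consumed by the kubeadm init run at 15:15:38.723074. A hedged sketch of checking the staged config without touching cluster state (kubeadm init supports --dry-run):
	sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" \
	  kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run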
	I1212 15:15:37.977600    3859 ssh_runner.go:195] Run: grep 192.169.0.15	control-plane.minikube.internal$ /etc/hosts
	I1212 15:15:37.980008    3859 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 15:15:37.988477    3859 certs.go:56] Setting up /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/multinode-449000-m02 for IP: 192.169.0.15
	I1212 15:15:37.988493    3859 certs.go:190] acquiring lock for shared ca certs: {Name:mkc116deb15cbfbe8939fd5907655f41e3f69c78 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 15:15:37.988638    3859 certs.go:199] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/17777-1259/.minikube/ca.key
	I1212 15:15:37.988684    3859 certs.go:199] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/17777-1259/.minikube/proxy-client-ca.key
	I1212 15:15:37.988731    3859 certs.go:319] generating minikube-user signed cert: /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/multinode-449000-m02/client.key
	I1212 15:15:37.988741    3859 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/multinode-449000-m02/client.crt with IP's: []
	I1212 15:15:38.329019    3859 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/multinode-449000-m02/client.crt ...
	I1212 15:15:38.329031    3859 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/multinode-449000-m02/client.crt: {Name:mk84da81da65fdca0ad455a9177c02420e81be37 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 15:15:38.329321    3859 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/multinode-449000-m02/client.key ...
	I1212 15:15:38.329326    3859 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/multinode-449000-m02/client.key: {Name:mk93004c7c9c17fc5f0e6c686fcd2b60b2ac04de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 15:15:38.329536    3859 certs.go:319] generating minikube signed cert: /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/multinode-449000-m02/apiserver.key.66702ba3
	I1212 15:15:38.329553    3859 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/multinode-449000-m02/apiserver.crt.66702ba3 with IP's: [192.169.0.15 10.96.0.1 127.0.0.1 10.0.0.1]
	I1212 15:15:38.382735    3859 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/multinode-449000-m02/apiserver.crt.66702ba3 ...
	I1212 15:15:38.382744    3859 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/multinode-449000-m02/apiserver.crt.66702ba3: {Name:mk70b2154feb837010983a67f60061b1905570b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 15:15:38.383053    3859 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/multinode-449000-m02/apiserver.key.66702ba3 ...
	I1212 15:15:38.383059    3859 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/multinode-449000-m02/apiserver.key.66702ba3: {Name:mk03fc6acad6b388ab24aadb05b75c0b33e0d135 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 15:15:38.383309    3859 certs.go:337] copying /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/multinode-449000-m02/apiserver.crt.66702ba3 -> /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/multinode-449000-m02/apiserver.crt
	I1212 15:15:38.383492    3859 certs.go:341] copying /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/multinode-449000-m02/apiserver.key.66702ba3 -> /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/multinode-449000-m02/apiserver.key
	I1212 15:15:38.383702    3859 certs.go:319] generating aggregator signed cert: /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/multinode-449000-m02/proxy-client.key
	I1212 15:15:38.383713    3859 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/multinode-449000-m02/proxy-client.crt with IP's: []
	I1212 15:15:38.431650    3859 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/multinode-449000-m02/proxy-client.crt ...
	I1212 15:15:38.431658    3859 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/multinode-449000-m02/proxy-client.crt: {Name:mk46a61061ee9f522a4b1fcc7765f61950c9df9e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 15:15:38.431964    3859 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/multinode-449000-m02/proxy-client.key ...
	I1212 15:15:38.431970    3859 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/multinode-449000-m02/proxy-client.key: {Name:mk3b5d8e6ecc597d0d322be898f16373e4fce5f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 15:15:38.432357    3859 certs.go:437] found cert: /Users/jenkins/minikube-integration/17777-1259/.minikube/certs/Users/jenkins/minikube-integration/17777-1259/.minikube/certs/1720.pem (1338 bytes)
	W1212 15:15:38.432397    3859 certs.go:433] ignoring /Users/jenkins/minikube-integration/17777-1259/.minikube/certs/Users/jenkins/minikube-integration/17777-1259/.minikube/certs/1720_empty.pem, impossibly tiny 0 bytes
	I1212 15:15:38.432404    3859 certs.go:437] found cert: /Users/jenkins/minikube-integration/17777-1259/.minikube/certs/Users/jenkins/minikube-integration/17777-1259/.minikube/certs/ca-key.pem (1675 bytes)
	I1212 15:15:38.432452    3859 certs.go:437] found cert: /Users/jenkins/minikube-integration/17777-1259/.minikube/certs/Users/jenkins/minikube-integration/17777-1259/.minikube/certs/ca.pem (1082 bytes)
	I1212 15:15:38.432483    3859 certs.go:437] found cert: /Users/jenkins/minikube-integration/17777-1259/.minikube/certs/Users/jenkins/minikube-integration/17777-1259/.minikube/certs/cert.pem (1123 bytes)
	I1212 15:15:38.432512    3859 certs.go:437] found cert: /Users/jenkins/minikube-integration/17777-1259/.minikube/certs/Users/jenkins/minikube-integration/17777-1259/.minikube/certs/key.pem (1675 bytes)
	I1212 15:15:38.432580    3859 certs.go:437] found cert: /Users/jenkins/minikube-integration/17777-1259/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/17777-1259/.minikube/files/etc/ssl/certs/17202.pem (1708 bytes)
	I1212 15:15:38.433067    3859 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/multinode-449000-m02/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1212 15:15:38.449880    3859 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/multinode-449000-m02/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1212 15:15:38.466262    3859 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/multinode-449000-m02/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 15:15:38.483092    3859 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/multinode-449000-m02/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1212 15:15:38.500375    3859 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17777-1259/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 15:15:38.516749    3859 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17777-1259/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1212 15:15:38.533203    3859 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17777-1259/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 15:15:38.549857    3859 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17777-1259/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 15:15:38.566059    3859 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17777-1259/.minikube/certs/1720.pem --> /usr/share/ca-certificates/1720.pem (1338 bytes)
	I1212 15:15:38.582365    3859 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17777-1259/.minikube/files/etc/ssl/certs/17202.pem --> /usr/share/ca-certificates/17202.pem (1708 bytes)
	I1212 15:15:38.599551    3859 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17777-1259/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 15:15:38.615757    3859 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 15:15:38.627090    3859 ssh_runner.go:195] Run: openssl version
	I1212 15:15:38.630603    3859 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1720.pem && ln -fs /usr/share/ca-certificates/1720.pem /etc/ssl/certs/1720.pem"
	I1212 15:15:38.637094    3859 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1720.pem
	I1212 15:15:38.640001    3859 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec 12 22:59 /usr/share/ca-certificates/1720.pem
	I1212 15:15:38.640044    3859 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1720.pem
	I1212 15:15:38.643663    3859 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1720.pem /etc/ssl/certs/51391683.0"
	I1212 15:15:38.650157    3859 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17202.pem && ln -fs /usr/share/ca-certificates/17202.pem /etc/ssl/certs/17202.pem"
	I1212 15:15:38.656857    3859 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17202.pem
	I1212 15:15:38.659970    3859 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec 12 22:59 /usr/share/ca-certificates/17202.pem
	I1212 15:15:38.660015    3859 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17202.pem
	I1212 15:15:38.663655    3859 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/17202.pem /etc/ssl/certs/3ec20f2e.0"
	I1212 15:15:38.670169    3859 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1212 15:15:38.676528    3859 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 15:15:38.679496    3859 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec 12 22:55 /usr/share/ca-certificates/minikubeCA.pem
	I1212 15:15:38.679533    3859 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 15:15:38.682989    3859 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1212 15:15:38.689432    3859 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1212 15:15:38.691952    3859 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1212 15:15:38.691990    3859 kubeadm.go:404] StartCluster: {Name:multinode-449000-m02 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17738/minikube-v1.32.1-1701996673-17738-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.28.4 ClusterName:multinode-449000-m02 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.169.0.15 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disa
bleOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 15:15:38.692071    3859 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1212 15:15:38.705556    3859 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 15:15:38.711462    3859 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 15:15:38.717316    3859 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 15:15:38.723053    3859 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 15:15:38.723074    3859 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1212 15:15:38.865122    3859 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 15:15:49.038685    3859 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I1212 15:15:49.038731    3859 kubeadm.go:322] [preflight] Running pre-flight checks
	I1212 15:15:49.038784    3859 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 15:15:49.038861    3859 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 15:15:49.038932    3859 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1212 15:15:49.038996    3859 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 15:15:49.065390    3859 out.go:204]   - Generating certificates and keys ...
	I1212 15:15:49.065464    3859 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1212 15:15:49.065512    3859 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1212 15:15:49.065570    3859 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1212 15:15:49.065609    3859 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1212 15:15:49.065649    3859 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1212 15:15:49.065702    3859 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1212 15:15:49.065751    3859 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1212 15:15:49.065851    3859 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-449000-m02] and IPs [192.169.0.15 127.0.0.1 ::1]
	I1212 15:15:49.065896    3859 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1212 15:15:49.065992    3859 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-449000-m02] and IPs [192.169.0.15 127.0.0.1 ::1]
	I1212 15:15:49.066048    3859 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1212 15:15:49.066108    3859 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1212 15:15:49.066142    3859 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1212 15:15:49.066188    3859 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 15:15:49.066229    3859 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 15:15:49.066270    3859 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 15:15:49.066315    3859 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 15:15:49.066360    3859 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 15:15:49.066421    3859 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 15:15:49.066476    3859 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 15:15:49.122203    3859 out.go:204]   - Booting up control plane ...
	I1212 15:15:49.122282    3859 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 15:15:49.122339    3859 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 15:15:49.122402    3859 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 15:15:49.122492    3859 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 15:15:49.122565    3859 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 15:15:49.122599    3859 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1212 15:15:49.122718    3859 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1212 15:15:49.122769    3859 kubeadm.go:322] [apiclient] All control plane components are healthy after 5.502502 seconds
	I1212 15:15:49.122840    3859 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1212 15:15:49.122930    3859 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1212 15:15:49.122976    3859 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1212 15:15:49.123137    3859 kubeadm.go:322] [mark-control-plane] Marking the node multinode-449000-m02 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1212 15:15:49.123186    3859 kubeadm.go:322] [bootstrap-token] Using token: w5z4b7.366lfextgmey8f40
	I1212 15:15:49.185109    3859 out.go:204]   - Configuring RBAC rules ...
	I1212 15:15:49.185262    3859 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1212 15:15:49.185395    3859 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1212 15:15:49.185624    3859 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1212 15:15:49.185814    3859 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1212 15:15:49.186013    3859 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1212 15:15:49.186173    3859 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1212 15:15:49.186348    3859 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1212 15:15:49.186440    3859 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1212 15:15:49.186508    3859 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1212 15:15:49.186512    3859 kubeadm.go:322] 
	I1212 15:15:49.186614    3859 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1212 15:15:49.186627    3859 kubeadm.go:322] 
	I1212 15:15:49.186757    3859 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1212 15:15:49.186767    3859 kubeadm.go:322] 
	I1212 15:15:49.186808    3859 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1212 15:15:49.186907    3859 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1212 15:15:49.186955    3859 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1212 15:15:49.186958    3859 kubeadm.go:322] 
	I1212 15:15:49.187022    3859 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1212 15:15:49.187028    3859 kubeadm.go:322] 
	I1212 15:15:49.187116    3859 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1212 15:15:49.187124    3859 kubeadm.go:322] 
	I1212 15:15:49.187175    3859 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1212 15:15:49.187263    3859 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1212 15:15:49.187348    3859 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1212 15:15:49.187363    3859 kubeadm.go:322] 
	I1212 15:15:49.187450    3859 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1212 15:15:49.187536    3859 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1212 15:15:49.187539    3859 kubeadm.go:322] 
	I1212 15:15:49.187620    3859 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token w5z4b7.366lfextgmey8f40 \
	I1212 15:15:49.187739    3859 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:25d491fbe418ba59008b56e4443168fda1f3db5a6027e11eedddf6ca431378b5 \
	I1212 15:15:49.187763    3859 kubeadm.go:322] 	--control-plane 
	I1212 15:15:49.187767    3859 kubeadm.go:322] 
	I1212 15:15:49.187863    3859 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1212 15:15:49.187870    3859 kubeadm.go:322] 
	I1212 15:15:49.187955    3859 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token w5z4b7.366lfextgmey8f40 \
	I1212 15:15:49.188076    3859 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:25d491fbe418ba59008b56e4443168fda1f3db5a6027e11eedddf6ca431378b5 
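	kubeadm init ends by printing worker-join instructions; in this run no additional node joins this m02-named cluster, but the bootstrap token it minted can be inspected or regenerated on the control plane with standard kubeadm subcommands, for example:
	# On the control-plane VM (e.g. via: minikube ssh -p multinode-449000-m02):
	sudo kubeadm token list
	sudo kubeadm token create --print-join-command   # prints a fresh join command including the CA cert hash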
	I1212 15:15:49.188087    3859 cni.go:84] Creating CNI manager for ""
	I1212 15:15:49.188103    3859 cni.go:158] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1212 15:15:49.195143    3859 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1212 15:15:49.239123    3859 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1212 15:15:49.249592    3859 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1212 15:15:49.275389    3859 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1212 15:15:49.275444    3859 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 15:15:49.275445    3859 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=a4cfdd7fe6105c8f2fb237e157ac115c68ce5446 minikube.k8s.io/name=multinode-449000-m02 minikube.k8s.io/updated_at=2023_12_12T15_15_49_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 15:15:49.313548    3859 ops.go:34] apiserver oom_adj: -16
	I1212 15:15:49.382680    3859 kubeadm.go:1088] duration metric: took 107.284516ms to wait for elevateKubeSystemPrivileges.
	I1212 15:15:49.382784    3859 kubeadm.go:406] StartCluster complete in 10.690869704s
	I1212 15:15:49.382797    3859 settings.go:142] acquiring lock: {Name:mka464ae20beabe0956367b7c096b2df64ddda96 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 15:15:49.382866    3859 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/17777-1259/kubeconfig
	I1212 15:15:49.383750    3859 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17777-1259/kubeconfig: {Name:mk59d3fcca7c93e43d82a40f16bbb777946cd182 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 15:15:49.383992    3859 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1212 15:15:49.384050    3859 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1212 15:15:49.384083    3859 addons.go:69] Setting storage-provisioner=true in profile "multinode-449000-m02"
	I1212 15:15:49.384094    3859 addons.go:231] Setting addon storage-provisioner=true in "multinode-449000-m02"
	I1212 15:15:49.384097    3859 addons.go:69] Setting default-storageclass=true in profile "multinode-449000-m02"
	I1212 15:15:49.384107    3859 config.go:182] Loaded profile config "multinode-449000-m02": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1212 15:15:49.384117    3859 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-449000-m02"
	I1212 15:15:49.384139    3859 host.go:66] Checking if "multinode-449000-m02" exists ...
	I1212 15:15:49.384392    3859 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1212 15:15:49.384409    3859 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1212 15:15:49.384417    3859 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1212 15:15:49.384436    3859 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1212 15:15:49.393547    3859 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51500
	I1212 15:15:49.393933    3859 main.go:141] libmachine: () Calling .GetVersion
	I1212 15:15:49.394313    3859 main.go:141] libmachine: Using API Version  1
	I1212 15:15:49.394325    3859 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 15:15:49.394526    3859 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51502
	I1212 15:15:49.394574    3859 main.go:141] libmachine: () Calling .GetMachineName
	I1212 15:15:49.394858    3859 main.go:141] libmachine: () Calling .GetVersion
	I1212 15:15:49.395018    3859 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1212 15:15:49.395041    3859 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1212 15:15:49.395196    3859 main.go:141] libmachine: Using API Version  1
	I1212 15:15:49.395237    3859 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 15:15:49.395891    3859 main.go:141] libmachine: () Calling .GetMachineName
	I1212 15:15:49.396291    3859 main.go:141] libmachine: (multinode-449000-m02) Calling .GetState
	I1212 15:15:49.396436    3859 main.go:141] libmachine: (multinode-449000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1212 15:15:49.396523    3859 main.go:141] libmachine: (multinode-449000-m02) DBG | hyperkit pid from json: 3868
	I1212 15:15:49.398854    3859 addons.go:231] Setting addon default-storageclass=true in "multinode-449000-m02"
	I1212 15:15:49.398914    3859 host.go:66] Checking if "multinode-449000-m02" exists ...
	I1212 15:15:49.399185    3859 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1212 15:15:49.399215    3859 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1212 15:15:49.404882    3859 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51504
	I1212 15:15:49.405290    3859 main.go:141] libmachine: () Calling .GetVersion
	I1212 15:15:49.405755    3859 main.go:141] libmachine: Using API Version  1
	I1212 15:15:49.405767    3859 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 15:15:49.406091    3859 main.go:141] libmachine: () Calling .GetMachineName
	I1212 15:15:49.406224    3859 main.go:141] libmachine: (multinode-449000-m02) Calling .GetState
	I1212 15:15:49.406340    3859 main.go:141] libmachine: (multinode-449000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1212 15:15:49.406404    3859 main.go:141] libmachine: (multinode-449000-m02) DBG | hyperkit pid from json: 3868
	I1212 15:15:49.407422    3859 main.go:141] libmachine: (multinode-449000-m02) Calling .DriverName
	I1212 15:15:49.448385    3859 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 15:15:49.408749    3859 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51506
	I1212 15:15:49.448837    3859 main.go:141] libmachine: () Calling .GetVersion
	I1212 15:15:49.485103    3859 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 15:15:49.485110    3859 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1212 15:15:49.485123    3859 main.go:141] libmachine: (multinode-449000-m02) Calling .GetSSHHostname
	I1212 15:15:49.485265    3859 main.go:141] libmachine: (multinode-449000-m02) Calling .GetSSHPort
	I1212 15:15:49.485427    3859 main.go:141] libmachine: Using API Version  1
	I1212 15:15:49.485437    3859 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 15:15:49.485441    3859 main.go:141] libmachine: (multinode-449000-m02) Calling .GetSSHKeyPath
	I1212 15:15:49.485549    3859 main.go:141] libmachine: (multinode-449000-m02) Calling .GetSSHUsername
	I1212 15:15:49.485638    3859 sshutil.go:53] new ssh client: &{IP:192.169.0.15 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17777-1259/.minikube/machines/multinode-449000-m02/id_rsa Username:docker}
	I1212 15:15:49.485684    3859 main.go:141] libmachine: () Calling .GetMachineName
	I1212 15:15:49.486028    3859 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1212 15:15:49.486070    3859 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1212 15:15:49.489535    3859 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-449000-m02" context rescaled to 1 replicas
	I1212 15:15:49.489560    3859 start.go:223] Will wait 6m0s for node &{Name: IP:192.169.0.15 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1212 15:15:49.510213    3859 out.go:177] * Verifying Kubernetes components...
	I1212 15:15:49.492641    3859 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.169.0.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1212 15:15:49.494600    3859 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51509
	I1212 15:15:49.539606    3859 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 15:15:49.552128    3859 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 15:15:49.552412    3859 main.go:141] libmachine: () Calling .GetVersion
	I1212 15:15:49.552826    3859 main.go:141] libmachine: Using API Version  1
	I1212 15:15:49.552834    3859 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 15:15:49.553057    3859 main.go:141] libmachine: () Calling .GetMachineName
	I1212 15:15:49.553169    3859 main.go:141] libmachine: (multinode-449000-m02) Calling .GetState
	I1212 15:15:49.553256    3859 main.go:141] libmachine: (multinode-449000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1212 15:15:49.553340    3859 main.go:141] libmachine: (multinode-449000-m02) DBG | hyperkit pid from json: 3868
	I1212 15:15:49.554375    3859 main.go:141] libmachine: (multinode-449000-m02) Calling .DriverName
	I1212 15:15:49.554531    3859 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1212 15:15:49.554536    3859 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1212 15:15:49.554543    3859 main.go:141] libmachine: (multinode-449000-m02) Calling .GetSSHHostname
	I1212 15:15:49.554624    3859 main.go:141] libmachine: (multinode-449000-m02) Calling .GetSSHPort
	I1212 15:15:49.554694    3859 main.go:141] libmachine: (multinode-449000-m02) Calling .GetSSHKeyPath
	I1212 15:15:49.554774    3859 main.go:141] libmachine: (multinode-449000-m02) Calling .GetSSHUsername
	I1212 15:15:49.554846    3859 sshutil.go:53] new ssh client: &{IP:192.169.0.15 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17777-1259/.minikube/machines/multinode-449000-m02/id_rsa Username:docker}
	I1212 15:15:49.680425    3859 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1212 15:15:50.535488    3859 main.go:141] libmachine: Making call to close driver server
	I1212 15:15:50.535495    3859 main.go:141] libmachine: (multinode-449000-m02) Calling .Close
	I1212 15:15:50.535543    3859 start.go:929] {"host.minikube.internal": 192.169.0.1} host record injected into CoreDNS's ConfigMap
	I1212 15:15:50.535618    3859 main.go:141] libmachine: Making call to close driver server
	I1212 15:15:50.535623    3859 main.go:141] libmachine: (multinode-449000-m02) Calling .Close
	I1212 15:15:50.535708    3859 main.go:141] libmachine: Successfully made call to close driver server
	I1212 15:15:50.535711    3859 main.go:141] libmachine: (multinode-449000-m02) DBG | Closing plugin on server side
	I1212 15:15:50.535714    3859 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 15:15:50.535728    3859 main.go:141] libmachine: Making call to close driver server
	I1212 15:15:50.535733    3859 main.go:141] libmachine: (multinode-449000-m02) Calling .Close
	I1212 15:15:50.535772    3859 main.go:141] libmachine: Successfully made call to close driver server
	I1212 15:15:50.535785    3859 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 15:15:50.535786    3859 main.go:141] libmachine: (multinode-449000-m02) DBG | Closing plugin on server side
	I1212 15:15:50.535791    3859 main.go:141] libmachine: Making call to close driver server
	I1212 15:15:50.535803    3859 main.go:141] libmachine: (multinode-449000-m02) Calling .Close
	I1212 15:15:50.535883    3859 main.go:141] libmachine: Successfully made call to close driver server
	I1212 15:15:50.535889    3859 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 15:15:50.535893    3859 main.go:141] libmachine: (multinode-449000-m02) DBG | Closing plugin on server side
	I1212 15:15:50.535950    3859 main.go:141] libmachine: (multinode-449000-m02) DBG | Closing plugin on server side
	I1212 15:15:50.535957    3859 main.go:141] libmachine: Successfully made call to close driver server
	I1212 15:15:50.535963    3859 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 15:15:50.536614    3859 api_server.go:52] waiting for apiserver process to appear ...
	I1212 15:15:50.536651    3859 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 15:15:50.545041    3859 main.go:141] libmachine: Making call to close driver server
	I1212 15:15:50.545049    3859 main.go:141] libmachine: (multinode-449000-m02) Calling .Close
	I1212 15:15:50.545240    3859 main.go:141] libmachine: (multinode-449000-m02) DBG | Closing plugin on server side
	I1212 15:15:50.545242    3859 main.go:141] libmachine: Successfully made call to close driver server
	I1212 15:15:50.545248    3859 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 15:15:50.571664    3859 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1212 15:15:50.553791    3859 api_server.go:72] duration metric: took 1.064222011s to wait for apiserver process to appear ...
	I1212 15:15:50.629259    3859 api_server.go:88] waiting for apiserver healthz status ...
	I1212 15:15:50.629259    3859 addons.go:502] enable addons completed in 1.245226439s: enabled=[storage-provisioner default-storageclass]
	I1212 15:15:50.629292    3859 api_server.go:253] Checking apiserver healthz at https://192.169.0.15:8443/healthz ...
	I1212 15:15:50.695911    3859 api_server.go:279] https://192.169.0.15:8443/healthz returned 200:
	ok
	I1212 15:15:50.697078    3859 api_server.go:141] control plane version: v1.28.4
	I1212 15:15:50.697087    3859 api_server.go:131] duration metric: took 67.822978ms to wait for apiserver health ...
	I1212 15:15:50.697097    3859 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 15:15:50.703225    3859 system_pods.go:59] 5 kube-system pods found
	I1212 15:15:50.703243    3859 system_pods.go:61] "etcd-multinode-449000-m02" [07ffac64-3dbc-4c54-a799-518ff6984aa8] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1212 15:15:50.703249    3859 system_pods.go:61] "kube-apiserver-multinode-449000-m02" [8276382e-9283-4b55-aab3-4840eb43c579] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1212 15:15:50.703255    3859 system_pods.go:61] "kube-controller-manager-multinode-449000-m02" [3dee598e-4f72-4d54-a0c9-dc51b8a47ae9] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1212 15:15:50.703261    3859 system_pods.go:61] "kube-scheduler-multinode-449000-m02" [b7bf1ff4-1e02-4c91-8f84-2faa324709cd] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1212 15:15:50.703264    3859 system_pods.go:61] "storage-provisioner" [4d1b340d-b96c-42ae-a1a6-eb20e0a93d03] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..)
	I1212 15:15:50.703270    3859 system_pods.go:74] duration metric: took 6.170113ms to wait for pod list to return data ...
	I1212 15:15:50.703275    3859 kubeadm.go:581] duration metric: took 1.213709528s to wait for : map[apiserver:true system_pods:true] ...
	I1212 15:15:50.703283    3859 node_conditions.go:102] verifying NodePressure condition ...
	I1212 15:15:50.705589    3859 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1212 15:15:50.705600    3859 node_conditions.go:123] node cpu capacity is 2
	I1212 15:15:50.705610    3859 node_conditions.go:105] duration metric: took 2.324569ms to run NodePressure ...
	I1212 15:15:50.705616    3859 start.go:228] waiting for startup goroutines ...
	I1212 15:15:50.705619    3859 start.go:233] waiting for cluster config update ...
	I1212 15:15:50.705626    3859 start.go:242] writing updated cluster config ...
	I1212 15:15:50.705928    3859 ssh_runner.go:195] Run: rm -f paused
	I1212 15:15:50.745559    3859 start.go:600] kubectl: 1.28.2, cluster: 1.28.4 (minor skew: 0)
	I1212 15:15:50.768258    3859 out.go:177] * Done! kubectl is now configured to use "multinode-449000-m02" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Journal begins at Tue 2023-12-12 23:13:51 UTC, ends at Tue 2023-12-12 23:15:57 UTC. --
	Dec 12 23:14:15 multinode-449000 dockerd[829]: time="2023-12-12T23:14:15.314278842Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 12 23:14:15 multinode-449000 dockerd[829]: time="2023-12-12T23:14:15.314291171Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 12 23:14:15 multinode-449000 dockerd[829]: time="2023-12-12T23:14:15.314298450Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 12 23:14:17 multinode-449000 cri-dockerd[1027]: time="2023-12-12T23:14:17Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/66b3849798a9110a57b64253bbb603af2ba17728dc7eaf9e4f48ec5c4fa8f726/resolv.conf as [nameserver 192.169.0.1]"
	Dec 12 23:14:17 multinode-449000 dockerd[829]: time="2023-12-12T23:14:17.518746266Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 12 23:14:17 multinode-449000 dockerd[829]: time="2023-12-12T23:14:17.519117995Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 12 23:14:17 multinode-449000 dockerd[829]: time="2023-12-12T23:14:17.519194808Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 12 23:14:17 multinode-449000 dockerd[829]: time="2023-12-12T23:14:17.519251920Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 12 23:14:22 multinode-449000 dockerd[829]: time="2023-12-12T23:14:22.003640689Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 12 23:14:22 multinode-449000 dockerd[829]: time="2023-12-12T23:14:22.003685990Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 12 23:14:22 multinode-449000 dockerd[829]: time="2023-12-12T23:14:22.003706098Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 12 23:14:22 multinode-449000 dockerd[829]: time="2023-12-12T23:14:22.003715914Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 12 23:14:22 multinode-449000 cri-dockerd[1027]: time="2023-12-12T23:14:22Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/416854ec1af27a500468dfec9544e23421e8b31d5496b11afcfe0709cb95ca3a/resolv.conf as [nameserver 192.169.0.1]"
	Dec 12 23:14:22 multinode-449000 dockerd[829]: time="2023-12-12T23:14:22.354256453Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 12 23:14:22 multinode-449000 dockerd[829]: time="2023-12-12T23:14:22.354453693Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 12 23:14:22 multinode-449000 dockerd[829]: time="2023-12-12T23:14:22.354518110Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 12 23:14:22 multinode-449000 dockerd[829]: time="2023-12-12T23:14:22.354650589Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 12 23:14:45 multinode-449000 dockerd[823]: time="2023-12-12T23:14:45.435126785Z" level=info msg="ignoring event" container=e5afc68eedda9c89ab00c18198f9921e29ddb8d3dd6e5e0db0071016254b42a3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 12 23:14:45 multinode-449000 dockerd[829]: time="2023-12-12T23:14:45.435644284Z" level=info msg="shim disconnected" id=e5afc68eedda9c89ab00c18198f9921e29ddb8d3dd6e5e0db0071016254b42a3 namespace=moby
	Dec 12 23:14:45 multinode-449000 dockerd[829]: time="2023-12-12T23:14:45.435746157Z" level=warning msg="cleaning up after shim disconnected" id=e5afc68eedda9c89ab00c18198f9921e29ddb8d3dd6e5e0db0071016254b42a3 namespace=moby
	Dec 12 23:14:45 multinode-449000 dockerd[829]: time="2023-12-12T23:14:45.435756338Z" level=info msg="cleaning up dead shim" namespace=moby
	Dec 12 23:15:00 multinode-449000 dockerd[829]: time="2023-12-12T23:15:00.392087271Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 12 23:15:00 multinode-449000 dockerd[829]: time="2023-12-12T23:15:00.392215038Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 12 23:15:00 multinode-449000 dockerd[829]: time="2023-12-12T23:15:00.392233003Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 12 23:15:00 multinode-449000 dockerd[829]: time="2023-12-12T23:15:00.392243733Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                      CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	14f74fd6af347       6e38f40d628db                                                                              57 seconds ago       Running             storage-provisioner       2                   7cffdc22a3f43       storage-provisioner
	94e368aff21e4       ead0a4a53df89                                                                              About a minute ago   Running             coredns                   1                   416854ec1af27       coredns-5dd5756b68-gbw2q
	17be0784b8346       c7d1297425461                                                                              About a minute ago   Running             kindnet-cni               1                   66b3849798a91       kindnet-zkv5v
	e5afc68eedda9       6e38f40d628db                                                                              About a minute ago   Exited              storage-provisioner       1                   7cffdc22a3f43       storage-provisioner
	0da1678ef4c24       83f6cc407eed8                                                                              About a minute ago   Running             kube-proxy                1                   fd8c1a2625482       kube-proxy-hxq22
	72d03f717cc24       e3db313c6dbc0                                                                              About a minute ago   Running             kube-scheduler            1                   a1064c36cfb9f       kube-scheduler-multinode-449000
	375931cc49b62       73deb9a3f7025                                                                              About a minute ago   Running             etcd                      1                   efaed44d77b68       etcd-multinode-449000
	641d4dcee3a2e       d058aa5ab969c                                                                              About a minute ago   Running             kube-controller-manager   1                   f735eb419a518       kube-controller-manager-multinode-449000
	7e9188da4ac19       7fe0e6f37db33                                                                              About a minute ago   Running             kube-apiserver            1                   a224a0a848c57       kube-apiserver-multinode-449000
	95bc5fcd783f5       ead0a4a53df89                                                                              2 minutes ago        Exited              coredns                   0                   29a2e0536a84a       coredns-5dd5756b68-gbw2q
	58bbe956bbc01       kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052   2 minutes ago        Exited              kindnet-cni               0                   58468ea0d3365       kindnet-zkv5v
	bc270a1f54f31       83f6cc407eed8                                                                              2 minutes ago        Exited              kube-proxy                0                   8189af807d9f1       kube-proxy-hxq22
	f52a90b7997c0       e3db313c6dbc0                                                                              2 minutes ago        Exited              kube-scheduler            0                   4a6892d4d8341       kube-scheduler-multinode-449000
	cbf4f71244550       73deb9a3f7025                                                                              2 minutes ago        Exited              etcd                      0                   de90edd09b0ec       etcd-multinode-449000
	d57c6b9df1bf2       7fe0e6f37db33                                                                              2 minutes ago        Exited              kube-apiserver            0                   e22fa4a926f7b       kube-apiserver-multinode-449000
	a65940e255b01       d058aa5ab969c                                                                              2 minutes ago        Exited              kube-controller-manager   0                   e84049d10a454       kube-controller-manager-multinode-449000
	
	* 
	* ==> coredns [94e368aff21e] <==
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:36626 - 20132 "HINFO IN 4050060911229301056.5380516612431628534. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.011185175s
	
	* 
	* ==> coredns [95bc5fcd783f] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:35091 - 44462 "HINFO IN 6377447879366584547.718696205685487622. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.013431538s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	* 
	* ==> describe nodes <==
	* Name:               multinode-449000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-449000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a4cfdd7fe6105c8f2fb237e157ac115c68ce5446
	                    minikube.k8s.io/name=multinode-449000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_12_12T15_13_05_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 12 Dec 2023 23:13:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-449000
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 12 Dec 2023 23:15:56 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 12 Dec 2023 23:14:24 +0000   Tue, 12 Dec 2023 23:12:59 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 12 Dec 2023 23:14:24 +0000   Tue, 12 Dec 2023 23:12:59 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 12 Dec 2023 23:14:24 +0000   Tue, 12 Dec 2023 23:12:59 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 12 Dec 2023 23:14:24 +0000   Tue, 12 Dec 2023 23:14:24 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.13
	  Hostname:    multinode-449000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2166052Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2166052Ki
	  pods:               110
	System Info:
	  Machine ID:                 4d39a0d33c3541cc99d09ae9cba43e45
	  System UUID:                9fde11ee-0000-0000-8111-f01898ef957c
	  Boot ID:                    c17ee9e4-2b44-420e-a492-b4d2402f4d1c
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://24.0.7
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-gbw2q                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     2m40s
	  kube-system                 etcd-multinode-449000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         2m53s
	  kube-system                 kindnet-zkv5v                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      2m40s
	  kube-system                 kube-apiserver-multinode-449000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m55s
	  kube-system                 kube-controller-manager-multinode-449000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m53s
	  kube-system                 kube-proxy-hxq22                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m40s
	  kube-system                 kube-scheduler-multinode-449000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m53s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m39s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 2m39s                kube-proxy       
	  Normal  Starting                 102s                 kube-proxy       
	  Normal  NodeHasSufficientPID     2m53s                kubelet          Node multinode-449000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m53s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  2m53s                kubelet          Node multinode-449000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m53s                kubelet          Node multinode-449000 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 2m53s                kubelet          Starting kubelet.
	  Normal  RegisteredNode           2m41s                node-controller  Node multinode-449000 event: Registered Node multinode-449000 in Controller
	  Normal  NodeReady                2m30s                kubelet          Node multinode-449000 status is now: NodeReady
	  Normal  Starting                 107s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  107s (x8 over 107s)  kubelet          Node multinode-449000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    107s (x8 over 107s)  kubelet          Node multinode-449000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     107s (x7 over 107s)  kubelet          Node multinode-449000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  107s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           90s                  node-controller  Node multinode-449000 event: Registered Node multinode-449000 in Controller
	
	* 
	* ==> dmesg <==
	* [  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.028530] ACPI BIOS Warning (bug): Incorrect checksum in table [DSDT] - 0xBE, should be 0x1B (20200925/tbprint-173)
	[  +5.014314] ACPI Error: Could not enable RealTimeClock event (20200925/evxfevnt-182)
	[  +0.000002] ACPI Warning: Could not enable fixed event - RealTimeClock (4) (20200925/evxface-618)
	[  +0.007042] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.347696] systemd-fstab-generator[125]: Ignoring "noauto" for root device
	[  +0.037420] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000001] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +1.885062] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +2.012553] systemd-fstab-generator[512]: Ignoring "noauto" for root device
	[  +0.083231] systemd-fstab-generator[523]: Ignoring "noauto" for root device
	[  +0.766335] systemd-fstab-generator[739]: Ignoring "noauto" for root device
	[  +0.212137] systemd-fstab-generator[779]: Ignoring "noauto" for root device
	[  +0.088669] systemd-fstab-generator[790]: Ignoring "noauto" for root device
	[  +0.100934] systemd-fstab-generator[803]: Ignoring "noauto" for root device
	[  +1.386458] systemd-fstab-generator[972]: Ignoring "noauto" for root device
	[  +0.090727] systemd-fstab-generator[983]: Ignoring "noauto" for root device
	[  +0.100127] systemd-fstab-generator[994]: Ignoring "noauto" for root device
	[  +0.094869] systemd-fstab-generator[1005]: Ignoring "noauto" for root device
	[  +0.105675] systemd-fstab-generator[1019]: Ignoring "noauto" for root device
	[Dec12 23:14] systemd-fstab-generator[1262]: Ignoring "noauto" for root device
	[  +0.228992] kauditd_printk_skb: 69 callbacks suppressed
	
	* 
	* ==> etcd [375931cc49b6] <==
	* {"level":"info","ts":"2023-12-12T23:14:11.766019Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"e0290fa3161c5471","initial-advertise-peer-urls":["https://192.169.0.13:2380"],"listen-peer-urls":["https://192.169.0.13:2380"],"advertise-client-urls":["https://192.169.0.13:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.169.0.13:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-12-12T23:14:11.766039Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-12-12T23:14:11.765892Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-12-12T23:14:11.766126Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-12-12T23:14:11.766132Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-12-12T23:14:11.766276Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.169.0.13:2380"}
	{"level":"info","ts":"2023-12-12T23:14:11.766283Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.169.0.13:2380"}
	{"level":"info","ts":"2023-12-12T23:14:11.766471Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e0290fa3161c5471 switched to configuration voters=(16152458731666035825)"}
	{"level":"info","ts":"2023-12-12T23:14:11.76651Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"87b46e718846f146","local-member-id":"e0290fa3161c5471","added-peer-id":"e0290fa3161c5471","added-peer-peer-urls":["https://192.169.0.13:2380"]}
	{"level":"info","ts":"2023-12-12T23:14:11.766567Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"87b46e718846f146","local-member-id":"e0290fa3161c5471","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-12T23:14:11.766586Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-12T23:14:13.042583Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e0290fa3161c5471 is starting a new election at term 2"}
	{"level":"info","ts":"2023-12-12T23:14:13.042696Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e0290fa3161c5471 became pre-candidate at term 2"}
	{"level":"info","ts":"2023-12-12T23:14:13.042817Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e0290fa3161c5471 received MsgPreVoteResp from e0290fa3161c5471 at term 2"}
	{"level":"info","ts":"2023-12-12T23:14:13.042901Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e0290fa3161c5471 became candidate at term 3"}
	{"level":"info","ts":"2023-12-12T23:14:13.042952Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e0290fa3161c5471 received MsgVoteResp from e0290fa3161c5471 at term 3"}
	{"level":"info","ts":"2023-12-12T23:14:13.043079Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e0290fa3161c5471 became leader at term 3"}
	{"level":"info","ts":"2023-12-12T23:14:13.043129Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: e0290fa3161c5471 elected leader e0290fa3161c5471 at term 3"}
	{"level":"info","ts":"2023-12-12T23:14:13.044064Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"e0290fa3161c5471","local-member-attributes":"{Name:multinode-449000 ClientURLs:[https://192.169.0.13:2379]}","request-path":"/0/members/e0290fa3161c5471/attributes","cluster-id":"87b46e718846f146","publish-timeout":"7s"}
	{"level":"info","ts":"2023-12-12T23:14:13.044317Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-12T23:14:13.045133Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.169.0.13:2379"}
	{"level":"info","ts":"2023-12-12T23:14:13.045337Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-12T23:14:13.0461Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-12-12T23:14:13.046181Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-12-12T23:14:13.046691Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	* 
	* ==> etcd [cbf4f7124455] <==
	* {"level":"info","ts":"2023-12-12T23:13:00.594053Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e0290fa3161c5471 became leader at term 2"}
	{"level":"info","ts":"2023-12-12T23:13:00.594061Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: e0290fa3161c5471 elected leader e0290fa3161c5471 at term 2"}
	{"level":"info","ts":"2023-12-12T23:13:00.594813Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"e0290fa3161c5471","local-member-attributes":"{Name:multinode-449000 ClientURLs:[https://192.169.0.13:2379]}","request-path":"/0/members/e0290fa3161c5471/attributes","cluster-id":"87b46e718846f146","publish-timeout":"7s"}
	{"level":"info","ts":"2023-12-12T23:13:00.596408Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-12T23:13:00.597065Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-12-12T23:13:00.597176Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-12T23:13:00.597273Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-12T23:13:00.601498Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.169.0.13:2379"}
	{"level":"info","ts":"2023-12-12T23:13:00.601865Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-12-12T23:13:00.601875Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-12-12T23:13:00.623742Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"87b46e718846f146","local-member-id":"e0290fa3161c5471","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-12T23:13:00.623931Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-12T23:13:00.624046Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-12T23:13:17.553066Z","caller":"traceutil/trace.go:171","msg":"trace[473893054] transaction","detail":"{read_only:false; response_revision:344; number_of_response:1; }","duration":"111.978423ms","start":"2023-12-12T23:13:17.440922Z","end":"2023-12-12T23:13:17.5529Z","steps":["trace[473893054] 'process raft request'  (duration: 37.599328ms)","trace[473893054] 'compare'  (duration: 74.277806ms)"],"step_count":2}
	{"level":"info","ts":"2023-12-12T23:13:17.553742Z","caller":"traceutil/trace.go:171","msg":"trace[91434881] transaction","detail":"{read_only:false; response_revision:345; number_of_response:1; }","duration":"111.636257ms","start":"2023-12-12T23:13:17.442093Z","end":"2023-12-12T23:13:17.553729Z","steps":["trace[91434881] 'process raft request'  (duration: 111.32696ms)"],"step_count":1}
	{"level":"info","ts":"2023-12-12T23:13:35.183429Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2023-12-12T23:13:35.183471Z","caller":"embed/etcd.go:376","msg":"closing etcd server","name":"multinode-449000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.169.0.13:2380"],"advertise-client-urls":["https://192.169.0.13:2379"]}
	{"level":"warn","ts":"2023-12-12T23:13:35.183566Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-12-12T23:13:35.183635Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-12-12T23:13:35.197401Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.169.0.13:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-12-12T23:13:35.197445Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.169.0.13:2379: use of closed network connection"}
	{"level":"info","ts":"2023-12-12T23:13:35.197493Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"e0290fa3161c5471","current-leader-member-id":"e0290fa3161c5471"}
	{"level":"info","ts":"2023-12-12T23:13:35.198654Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.169.0.13:2380"}
	{"level":"info","ts":"2023-12-12T23:13:35.198691Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.169.0.13:2380"}
	{"level":"info","ts":"2023-12-12T23:13:35.198697Z","caller":"embed/etcd.go:378","msg":"closed etcd server","name":"multinode-449000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.169.0.13:2380"],"advertise-client-urls":["https://192.169.0.13:2379"]}
	
	* 
	* ==> kernel <==
	*  23:15:58 up 2 min,  0 users,  load average: 0.10, 0.08, 0.03
	Linux multinode-449000 5.10.57 #1 SMP Fri Dec 8 05:36:01 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kindnet [17be0784b834] <==
	* I1212 23:14:17.746528       1 main.go:116] setting mtu 1500 for CNI 
	I1212 23:14:17.746558       1 main.go:146] kindnetd IP family: "ipv4"
	I1212 23:14:17.746578       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I1212 23:14:18.045367       1 main.go:223] Handling node with IPs: map[192.169.0.13:{}]
	I1212 23:14:18.045594       1 main.go:227] handling current node
	I1212 23:14:28.058397       1 main.go:223] Handling node with IPs: map[192.169.0.13:{}]
	I1212 23:14:28.058431       1 main.go:227] handling current node
	I1212 23:14:38.067441       1 main.go:223] Handling node with IPs: map[192.169.0.13:{}]
	I1212 23:14:38.067582       1 main.go:227] handling current node
	I1212 23:14:48.070735       1 main.go:223] Handling node with IPs: map[192.169.0.13:{}]
	I1212 23:14:48.070939       1 main.go:227] handling current node
	I1212 23:14:58.079274       1 main.go:223] Handling node with IPs: map[192.169.0.13:{}]
	I1212 23:14:58.079450       1 main.go:227] handling current node
	I1212 23:15:08.082689       1 main.go:223] Handling node with IPs: map[192.169.0.13:{}]
	I1212 23:15:08.082752       1 main.go:227] handling current node
	I1212 23:15:18.091361       1 main.go:223] Handling node with IPs: map[192.169.0.13:{}]
	I1212 23:15:18.091395       1 main.go:227] handling current node
	I1212 23:15:28.103509       1 main.go:223] Handling node with IPs: map[192.169.0.13:{}]
	I1212 23:15:28.103523       1 main.go:227] handling current node
	I1212 23:15:38.109970       1 main.go:223] Handling node with IPs: map[192.169.0.13:{}]
	I1212 23:15:38.110004       1 main.go:227] handling current node
	I1212 23:15:48.113058       1 main.go:223] Handling node with IPs: map[192.169.0.13:{}]
	I1212 23:15:48.113147       1 main.go:227] handling current node
	I1212 23:15:58.124685       1 main.go:223] Handling node with IPs: map[192.169.0.13:{}]
	I1212 23:15:58.124701       1 main.go:227] handling current node
	
	* 
	* ==> kindnet [58bbe956bbc0] <==
	* I1212 23:13:23.520861       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I1212 23:13:23.520913       1 main.go:107] hostIP = 192.169.0.13
	podIP = 192.169.0.13
	I1212 23:13:23.521005       1 main.go:116] setting mtu 1500 for CNI 
	I1212 23:13:23.521018       1 main.go:146] kindnetd IP family: "ipv4"
	I1212 23:13:23.521036       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I1212 23:13:23.724964       1 main.go:223] Handling node with IPs: map[192.169.0.13:{}]
	I1212 23:13:23.725050       1 main.go:227] handling current node
	I1212 23:13:33.727761       1 main.go:223] Handling node with IPs: map[192.169.0.13:{}]
	I1212 23:13:33.727777       1 main.go:227] handling current node
	
	* 
	* ==> kube-apiserver [7e9188da4ac1] <==
	* I1212 23:14:14.064715       1 crd_finalizer.go:266] Starting CRDFinalizer
	I1212 23:14:14.031369       1 gc_controller.go:78] Starting apiserver lease garbage collector
	I1212 23:14:14.031459       1 aggregator.go:164] waiting for initial CRD sync...
	I1212 23:14:14.031483       1 dynamic_serving_content.go:132] "Starting controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I1212 23:14:14.088133       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1212 23:14:14.125243       1 shared_informer.go:318] Caches are synced for node_authorizer
	E1212 23:14:14.130678       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1212 23:14:14.130964       1 shared_informer.go:318] Caches are synced for configmaps
	I1212 23:14:14.131415       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1212 23:14:14.131454       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1212 23:14:14.132094       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1212 23:14:14.132136       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1212 23:14:14.132384       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1212 23:14:14.132903       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1212 23:14:14.132934       1 aggregator.go:166] initial CRD sync complete...
	I1212 23:14:14.132939       1 autoregister_controller.go:141] Starting autoregister controller
	I1212 23:14:14.132942       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1212 23:14:14.132946       1 cache.go:39] Caches are synced for autoregister controller
	I1212 23:14:15.036762       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1212 23:14:16.454625       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1212 23:14:16.534377       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1212 23:14:16.550673       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1212 23:14:16.593258       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1212 23:14:16.598012       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1212 23:15:17.884379       1 controller.go:624] quota admission added evaluator for: endpoints
	
	* 
	* ==> kube-apiserver [d57c6b9df1bf] <==
	* W1212 23:13:35.192081       1 logging.go:59] [core] [Channel #55 SubChannel #56] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 23:13:35.192095       1 logging.go:59] [core] [Channel #43 SubChannel #44] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 23:13:35.192105       1 logging.go:59] [core] [Channel #79 SubChannel #80] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 23:13:35.192123       1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 23:13:35.192138       1 logging.go:59] [core] [Channel #85 SubChannel #86] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 23:13:35.192146       1 logging.go:59] [core] [Channel #31 SubChannel #32] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 23:13:35.192161       1 logging.go:59] [core] [Channel #124 SubChannel #125] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 23:13:35.192178       1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 23:13:35.192183       1 logging.go:59] [core] [Channel #22 SubChannel #23] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 23:13:35.192200       1 logging.go:59] [core] [Channel #109 SubChannel #110] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 23:13:35.192205       1 logging.go:59] [core] [Channel #49 SubChannel #50] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 23:13:35.192228       1 logging.go:59] [core] [Channel #160 SubChannel #161] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 23:13:35.192232       1 logging.go:59] [core] [Channel #163 SubChannel #164] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 23:13:35.192250       1 logging.go:59] [core] [Channel #37 SubChannel #38] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 23:13:35.192262       1 logging.go:59] [core] [Channel #133 SubChannel #134] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 23:13:35.192271       1 logging.go:59] [core] [Channel #52 SubChannel #53] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 23:13:35.192285       1 logging.go:59] [core] [Channel #25 SubChannel #26] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 23:13:35.192302       1 logging.go:59] [core] [Channel #130 SubChannel #131] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 23:13:35.192317       1 logging.go:59] [core] [Channel #154 SubChannel #155] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 23:13:35.192323       1 logging.go:59] [core] [Channel #91 SubChannel #92] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 23:13:35.191582       1 logging.go:59] [core] [Channel #70 SubChannel #71] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 23:13:35.192343       1 logging.go:59] [core] [Channel #142 SubChannel #143] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 23:13:35.192363       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 23:13:35.192396       1 logging.go:59] [core] [Channel #112 SubChannel #113] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I1212 23:13:35.207051       1 controller.go:128] Shutting down kubernetes service endpoint reconciler
	
	* 
	* ==> kube-controller-manager [641d4dcee3a2] <==
	* I1212 23:14:27.097050       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-449000\" does not exist"
	I1212 23:14:27.101913       1 shared_informer.go:318] Caches are synced for node
	I1212 23:14:27.101984       1 range_allocator.go:174] "Sending events to api server"
	I1212 23:14:27.102118       1 range_allocator.go:178] "Starting range CIDR allocator"
	I1212 23:14:27.102225       1 shared_informer.go:311] Waiting for caches to sync for cidrallocator
	I1212 23:14:27.102281       1 shared_informer.go:318] Caches are synced for cidrallocator
	I1212 23:14:27.109462       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I1212 23:14:27.129639       1 shared_informer.go:318] Caches are synced for TTL
	I1212 23:14:27.143921       1 shared_informer.go:318] Caches are synced for taint
	I1212 23:14:27.144393       1 taint_manager.go:205] "Starting NoExecuteTaintManager"
	I1212 23:14:27.144793       1 node_lifecycle_controller.go:1225] "Initializing eviction metric for zone" zone=""
	I1212 23:14:27.144962       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-449000"
	I1212 23:14:27.145248       1 taint_manager.go:210] "Sending events to api server"
	I1212 23:14:27.144480       1 shared_informer.go:318] Caches are synced for persistent volume
	I1212 23:14:27.146443       1 event.go:307] "Event occurred" object="multinode-449000" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-449000 event: Registered Node multinode-449000 in Controller"
	I1212 23:14:27.146609       1 node_lifecycle_controller.go:1071] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I1212 23:14:27.183069       1 shared_informer.go:318] Caches are synced for GC
	I1212 23:14:27.188968       1 shared_informer.go:318] Caches are synced for stateful set
	I1212 23:14:27.195909       1 shared_informer.go:318] Caches are synced for attach detach
	I1212 23:14:27.222794       1 shared_informer.go:318] Caches are synced for resource quota
	I1212 23:14:27.244316       1 shared_informer.go:318] Caches are synced for daemon sets
	I1212 23:14:27.246695       1 shared_informer.go:318] Caches are synced for resource quota
	I1212 23:14:27.552889       1 shared_informer.go:318] Caches are synced for garbage collector
	I1212 23:14:27.553092       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1212 23:14:27.578946       1 shared_informer.go:318] Caches are synced for garbage collector
	
	* 
	* ==> kube-controller-manager [a65940e255b0] <==
	* I1212 23:13:16.536339       1 shared_informer.go:318] Caches are synced for resource quota
	I1212 23:13:16.580270       1 shared_informer.go:318] Caches are synced for deployment
	I1212 23:13:16.583892       1 shared_informer.go:318] Caches are synced for disruption
	I1212 23:13:16.584993       1 shared_informer.go:318] Caches are synced for resource quota
	I1212 23:13:16.625708       1 shared_informer.go:318] Caches are synced for ReplicaSet
	I1212 23:13:16.965091       1 shared_informer.go:318] Caches are synced for garbage collector
	I1212 23:13:16.991675       1 shared_informer.go:318] Caches are synced for garbage collector
	I1212 23:13:16.991709       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1212 23:13:17.139253       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-zkv5v"
	I1212 23:13:17.141698       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-hxq22"
	I1212 23:13:17.333986       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5dd5756b68 to 2"
	I1212 23:13:17.557309       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-pk47r"
	I1212 23:13:17.557360       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I1212 23:13:17.569686       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-gbw2q"
	I1212 23:13:17.589493       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="255.869106ms"
	I1212 23:13:17.604752       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-pk47r"
	I1212 23:13:17.611415       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="21.817254ms"
	I1212 23:13:17.624419       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="12.85131ms"
	I1212 23:13:17.624716       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="82.156µs"
	I1212 23:13:27.969254       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="53.807µs"
	I1212 23:13:27.989675       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="72.171µs"
	I1212 23:13:29.737447       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="61.07µs"
	I1212 23:13:29.778766       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="15.904788ms"
	I1212 23:13:29.778912       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="32.956µs"
	I1212 23:13:31.438926       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	
	* 
	* ==> kube-proxy [0da1678ef4c2] <==
	* I1212 23:14:15.224642       1 server_others.go:69] "Using iptables proxy"
	I1212 23:14:15.250649       1 node.go:141] Successfully retrieved node IP: 192.169.0.13
	I1212 23:14:15.308832       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1212 23:14:15.308873       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1212 23:14:15.310476       1 server_others.go:152] "Using iptables Proxier"
	I1212 23:14:15.310850       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1212 23:14:15.311287       1 server.go:846] "Version info" version="v1.28.4"
	I1212 23:14:15.311316       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1212 23:14:15.314039       1 config.go:188] "Starting service config controller"
	I1212 23:14:15.314498       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1212 23:14:15.314569       1 config.go:315] "Starting node config controller"
	I1212 23:14:15.314593       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1212 23:14:15.316013       1 config.go:97] "Starting endpoint slice config controller"
	I1212 23:14:15.316057       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1212 23:14:15.415374       1 shared_informer.go:318] Caches are synced for node config
	I1212 23:14:15.415435       1 shared_informer.go:318] Caches are synced for service config
	I1212 23:14:15.416613       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-proxy [bc270a1f54f3] <==
	* I1212 23:13:18.012684       1 server_others.go:69] "Using iptables proxy"
	I1212 23:13:18.037892       1 node.go:141] Successfully retrieved node IP: 192.169.0.13
	I1212 23:13:18.072494       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1212 23:13:18.072509       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1212 23:13:18.074981       1 server_others.go:152] "Using iptables Proxier"
	I1212 23:13:18.075043       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1212 23:13:18.075202       1 server.go:846] "Version info" version="v1.28.4"
	I1212 23:13:18.075209       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1212 23:13:18.076295       1 config.go:188] "Starting service config controller"
	I1212 23:13:18.076303       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1212 23:13:18.076315       1 config.go:97] "Starting endpoint slice config controller"
	I1212 23:13:18.076318       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1212 23:13:18.076333       1 config.go:315] "Starting node config controller"
	I1212 23:13:18.076335       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1212 23:13:18.177081       1 shared_informer.go:318] Caches are synced for node config
	I1212 23:13:18.177098       1 shared_informer.go:318] Caches are synced for service config
	I1212 23:13:18.177117       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-scheduler [72d03f717cc2] <==
	* I1212 23:14:12.618876       1 serving.go:348] Generated self-signed cert in-memory
	W1212 23:14:14.077358       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1212 23:14:14.077437       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1212 23:14:14.077459       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1212 23:14:14.077471       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1212 23:14:14.090493       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I1212 23:14:14.090793       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1212 23:14:14.092618       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1212 23:14:14.092701       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1212 23:14:14.093030       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1212 23:14:14.092764       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1212 23:14:14.194216       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kube-scheduler [f52a90b7997c] <==
	* W1212 23:13:01.627882       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1212 23:13:01.627969       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1212 23:13:01.628097       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1212 23:13:01.628146       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1212 23:13:01.628239       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1212 23:13:01.628286       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1212 23:13:01.628384       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1212 23:13:01.628478       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1212 23:13:02.458336       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1212 23:13:02.458362       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1212 23:13:02.467319       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1212 23:13:02.467352       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1212 23:13:02.496299       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1212 23:13:02.496382       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1212 23:13:02.572595       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1212 23:13:02.572751       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1212 23:13:02.707713       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1212 23:13:02.707895       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1212 23:13:02.722617       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1212 23:13:02.722657       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I1212 23:13:04.511351       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1212 23:13:35.140016       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I1212 23:13:35.140121       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I1212 23:13:35.140233       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E1212 23:13:35.140522       1 run.go:74] "command failed" err="finished without leader elect"
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Tue 2023-12-12 23:13:51 UTC, ends at Tue 2023-12-12 23:15:59 UTC. --
	Dec 12 23:14:14 multinode-449000 kubelet[1268]: I1212 23:14:14.374071    1268 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/92e2a49a-0055-4ae7-a167-fb51b4275183-lib-modules\") pod \"kindnet-zkv5v\" (UID: \"92e2a49a-0055-4ae7-a167-fb51b4275183\") " pod="kube-system/kindnet-zkv5v"
	Dec 12 23:14:14 multinode-449000 kubelet[1268]: I1212 23:14:14.374160    1268 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d330b0b4-7d3f-4386-a72d-cb235945c494-lib-modules\") pod \"kube-proxy-hxq22\" (UID: \"d330b0b4-7d3f-4386-a72d-cb235945c494\") " pod="kube-system/kube-proxy-hxq22"
	Dec 12 23:14:14 multinode-449000 kubelet[1268]: I1212 23:14:14.374215    1268 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/92e2a49a-0055-4ae7-a167-fb51b4275183-cni-cfg\") pod \"kindnet-zkv5v\" (UID: \"92e2a49a-0055-4ae7-a167-fb51b4275183\") " pod="kube-system/kindnet-zkv5v"
	Dec 12 23:14:14 multinode-449000 kubelet[1268]: E1212 23:14:14.374634    1268 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Dec 12 23:14:14 multinode-449000 kubelet[1268]: E1212 23:14:14.374789    1268 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/09d20e99-6d1a-46d5-858f-71585ab9e532-config-volume podName:09d20e99-6d1a-46d5-858f-71585ab9e532 nodeName:}" failed. No retries permitted until 2023-12-12 23:14:14.874754555 +0000 UTC m=+4.704404311 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/09d20e99-6d1a-46d5-858f-71585ab9e532-config-volume") pod "coredns-5dd5756b68-gbw2q" (UID: "09d20e99-6d1a-46d5-858f-71585ab9e532") : object "kube-system"/"coredns" not registered
	Dec 12 23:14:14 multinode-449000 kubelet[1268]: E1212 23:14:14.877665    1268 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Dec 12 23:14:14 multinode-449000 kubelet[1268]: E1212 23:14:14.877772    1268 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/09d20e99-6d1a-46d5-858f-71585ab9e532-config-volume podName:09d20e99-6d1a-46d5-858f-71585ab9e532 nodeName:}" failed. No retries permitted until 2023-12-12 23:14:15.877760314 +0000 UTC m=+5.707410069 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/09d20e99-6d1a-46d5-858f-71585ab9e532-config-volume") pod "coredns-5dd5756b68-gbw2q" (UID: "09d20e99-6d1a-46d5-858f-71585ab9e532") : object "kube-system"/"coredns" not registered
	Dec 12 23:14:14 multinode-449000 kubelet[1268]: I1212 23:14:14.890079    1268 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fd8c1a2625482be1dd7888a747109baf826ed6eb5c387c599b9d708506c7a49c"
	Dec 12 23:14:15 multinode-449000 kubelet[1268]: E1212 23:14:15.401265    1268 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	Dec 12 23:14:15 multinode-449000 kubelet[1268]: E1212 23:14:15.885579    1268 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Dec 12 23:14:15 multinode-449000 kubelet[1268]: E1212 23:14:15.885630    1268 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/09d20e99-6d1a-46d5-858f-71585ab9e532-config-volume podName:09d20e99-6d1a-46d5-858f-71585ab9e532 nodeName:}" failed. No retries permitted until 2023-12-12 23:14:17.885619912 +0000 UTC m=+7.715269668 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/09d20e99-6d1a-46d5-858f-71585ab9e532-config-volume") pod "coredns-5dd5756b68-gbw2q" (UID: "09d20e99-6d1a-46d5-858f-71585ab9e532") : object "kube-system"/"coredns" not registered
	Dec 12 23:14:17 multinode-449000 kubelet[1268]: I1212 23:14:17.468943    1268 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="66b3849798a9110a57b64253bbb603af2ba17728dc7eaf9e4f48ec5c4fa8f726"
	Dec 12 23:14:17 multinode-449000 kubelet[1268]: I1212 23:14:17.477215    1268 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7cffdc22a3f43f092b053882267f41dc2642fc2be77bb6c91f905f6404cec1a0"
	Dec 12 23:14:17 multinode-449000 kubelet[1268]: E1212 23:14:17.477562    1268 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-gbw2q" podUID="09d20e99-6d1a-46d5-858f-71585ab9e532"
	Dec 12 23:14:17 multinode-449000 kubelet[1268]: E1212 23:14:17.900843    1268 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Dec 12 23:14:17 multinode-449000 kubelet[1268]: E1212 23:14:17.900894    1268 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/09d20e99-6d1a-46d5-858f-71585ab9e532-config-volume podName:09d20e99-6d1a-46d5-858f-71585ab9e532 nodeName:}" failed. No retries permitted until 2023-12-12 23:14:21.90088354 +0000 UTC m=+11.730533296 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/09d20e99-6d1a-46d5-858f-71585ab9e532-config-volume") pod "coredns-5dd5756b68-gbw2q" (UID: "09d20e99-6d1a-46d5-858f-71585ab9e532") : object "kube-system"/"coredns" not registered
	Dec 12 23:14:19 multinode-449000 kubelet[1268]: E1212 23:14:19.349665    1268 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-gbw2q" podUID="09d20e99-6d1a-46d5-858f-71585ab9e532"
	Dec 12 23:14:45 multinode-449000 kubelet[1268]: I1212 23:14:45.708973    1268 scope.go:117] "RemoveContainer" containerID="349aceac4c902d41241325dafbc0d0374e2e6d70243a8d394bb4cd601f95ca24"
	Dec 12 23:14:45 multinode-449000 kubelet[1268]: I1212 23:14:45.709093    1268 scope.go:117] "RemoveContainer" containerID="e5afc68eedda9c89ab00c18198f9921e29ddb8d3dd6e5e0db0071016254b42a3"
	Dec 12 23:14:45 multinode-449000 kubelet[1268]: E1212 23:14:45.709284    1268 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(11d647a8-b7f7-411a-b861-f3d109085770)\"" pod="kube-system/storage-provisioner" podUID="11d647a8-b7f7-411a-b861-f3d109085770"
	Dec 12 23:15:00 multinode-449000 kubelet[1268]: I1212 23:15:00.348799    1268 scope.go:117] "RemoveContainer" containerID="e5afc68eedda9c89ab00c18198f9921e29ddb8d3dd6e5e0db0071016254b42a3"
	Dec 12 23:15:10 multinode-449000 kubelet[1268]: E1212 23:15:10.370388    1268 iptables.go:575] "Could not set up iptables canary" err=<
	Dec 12 23:15:10 multinode-449000 kubelet[1268]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 12 23:15:10 multinode-449000 kubelet[1268]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 12 23:15:10 multinode-449000 kubelet[1268]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	
	* 
	* ==> storage-provisioner [14f74fd6af34] <==
	* I1212 23:15:00.476464       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1212 23:15:00.489783       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1212 23:15:00.490189       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1212 23:15:17.886165       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1212 23:15:17.886577       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"3abdb08b-1824-4529-8878-e42e5ba065dd", APIVersion:"v1", ResourceVersion:"547", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' multinode-449000_9ca0f8e0-663c-49c7-a490-33dd82749b4a became leader
	I1212 23:15:17.886788       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_multinode-449000_9ca0f8e0-663c-49c7-a490-33dd82749b4a!
	I1212 23:15:17.988055       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_multinode-449000_9ca0f8e0-663c-49c7-a490-33dd82749b4a!
	
	* 
	* ==> storage-provisioner [e5afc68eedda] <==
	* I1212 23:14:15.419211       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1212 23:14:45.424676       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p multinode-449000 -n multinode-449000
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-449000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/ValidateNameConflict FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/ValidateNameConflict (86.47s)

TestNetworkPlugins/group/calico/Start (15.47s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p calico-246000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=hyperkit 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p calico-246000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=hyperkit : exit status 90 (15.453672767s)

-- stdout --
	* [calico-246000] minikube v1.32.0 on Darwin 14.2
	  - MINIKUBE_LOCATION=17777
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17777-1259/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17777-1259/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting control plane node calico-246000 in cluster calico-246000
	* Creating hyperkit VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	
	

-- /stdout --
** stderr ** 
	I1212 15:33:15.610297    5584 out.go:296] Setting OutFile to fd 1 ...
	I1212 15:33:15.610734    5584 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 15:33:15.610743    5584 out.go:309] Setting ErrFile to fd 2...
	I1212 15:33:15.610750    5584 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 15:33:15.610955    5584 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17777-1259/.minikube/bin
	I1212 15:33:15.612847    5584 out.go:303] Setting JSON to false
	I1212 15:33:15.643540    5584 start.go:128] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":3766,"bootTime":1702420229,"procs":513,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.2","kernelVersion":"23.2.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W1212 15:33:15.643683    5584 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1212 15:33:15.776960    5584 out.go:177] * [calico-246000] minikube v1.32.0 on Darwin 14.2
	I1212 15:33:15.799003    5584 out.go:177]   - MINIKUBE_LOCATION=17777
	I1212 15:33:15.799027    5584 notify.go:220] Checking for updates...
	I1212 15:33:15.840774    5584 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17777-1259/kubeconfig
	I1212 15:33:15.862875    5584 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1212 15:33:15.883994    5584 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 15:33:15.904996    5584 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17777-1259/.minikube
	I1212 15:33:15.978986    5584 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 15:33:16.032630    5584 config.go:182] Loaded profile config "kindnet-246000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1212 15:33:16.032762    5584 config.go:182] Loaded profile config "multinode-449000-m01": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1212 15:33:16.032861    5584 driver.go:392] Setting default libvirt URI to qemu:///system
	I1212 15:33:16.061907    5584 out.go:177] * Using the hyperkit driver based on user configuration
	I1212 15:33:16.119894    5584 start.go:298] selected driver: hyperkit
	I1212 15:33:16.119907    5584 start.go:902] validating driver "hyperkit" against <nil>
	I1212 15:33:16.119923    5584 start.go:913] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 15:33:16.123120    5584 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 15:33:16.123230    5584 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/17777-1259/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I1212 15:33:16.131284    5584 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.32.0
	I1212 15:33:16.135340    5584 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1212 15:33:16.135378    5584 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I1212 15:33:16.135409    5584 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1212 15:33:16.135605    5584 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 15:33:16.135690    5584 cni.go:84] Creating CNI manager for "calico"
	I1212 15:33:16.135707    5584 start_flags.go:318] Found "Calico" CNI - setting NetworkPlugin=cni
	I1212 15:33:16.135718    5584 start_flags.go:323] config:
	{Name:calico-246000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:calico-246000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker
CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 15:33:16.135871    5584 iso.go:125] acquiring lock: {Name:mk96a55b7848c6dd3321ed62339797ab51ac6b5a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 15:33:16.156967    5584 out.go:177] * Starting control plane node calico-246000 in cluster calico-246000
	I1212 15:33:16.214968    5584 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1212 15:33:16.215013    5584 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17777-1259/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I1212 15:33:16.215030    5584 cache.go:56] Caching tarball of preloaded images
	I1212 15:33:16.215135    5584 preload.go:174] Found /Users/jenkins/minikube-integration/17777-1259/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1212 15:33:16.215144    5584 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I1212 15:33:16.215227    5584 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/calico-246000/config.json ...
	I1212 15:33:16.215246    5584 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/calico-246000/config.json: {Name:mk5b6e5851786e374ff2493ff460b095ba6e52da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 15:33:16.215537    5584 start.go:365] acquiring machines lock for calico-246000: {Name:mk51496c390b032727acf9b9a5f67e389f19ec26 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1212 15:33:16.215587    5584 start.go:369] acquired machines lock for "calico-246000" in 39.977µs
	I1212 15:33:16.215609    5584 start.go:93] Provisioning new machine with config: &{Name:calico-246000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17738/minikube-v1.32.1-1701996673-17738-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCo
nfig:{KubernetesVersion:v1.28.4 ClusterName:calico-246000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:26214
4 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1212 15:33:16.215655    5584 start.go:125] createHost starting for "" (driver="hyperkit")
	I1212 15:33:16.236960    5584 out.go:204] * Creating hyperkit VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1212 15:33:16.237283    5584 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1212 15:33:16.237344    5584 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1212 15:33:16.245786    5584 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53278
	I1212 15:33:16.246266    5584 main.go:141] libmachine: () Calling .GetVersion
	I1212 15:33:16.246686    5584 main.go:141] libmachine: Using API Version  1
	I1212 15:33:16.246698    5584 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 15:33:16.246900    5584 main.go:141] libmachine: () Calling .GetMachineName
	I1212 15:33:16.246996    5584 main.go:141] libmachine: (calico-246000) Calling .GetMachineName
	I1212 15:33:16.247078    5584 main.go:141] libmachine: (calico-246000) Calling .DriverName
	I1212 15:33:16.247164    5584 start.go:159] libmachine.API.Create for "calico-246000" (driver="hyperkit")
	I1212 15:33:16.247189    5584 client.go:168] LocalClient.Create starting
	I1212 15:33:16.247221    5584 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17777-1259/.minikube/certs/ca.pem
	I1212 15:33:16.247273    5584 main.go:141] libmachine: Decoding PEM data...
	I1212 15:33:16.247289    5584 main.go:141] libmachine: Parsing certificate...
	I1212 15:33:16.247346    5584 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17777-1259/.minikube/certs/cert.pem
	I1212 15:33:16.247385    5584 main.go:141] libmachine: Decoding PEM data...
	I1212 15:33:16.247396    5584 main.go:141] libmachine: Parsing certificate...
	I1212 15:33:16.247409    5584 main.go:141] libmachine: Running pre-create checks...
	I1212 15:33:16.247420    5584 main.go:141] libmachine: (calico-246000) Calling .PreCreateCheck
	I1212 15:33:16.247494    5584 main.go:141] libmachine: (calico-246000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1212 15:33:16.247689    5584 main.go:141] libmachine: (calico-246000) Calling .GetConfigRaw
	I1212 15:33:16.274491    5584 main.go:141] libmachine: Creating machine...
	I1212 15:33:16.274528    5584 main.go:141] libmachine: (calico-246000) Calling .Create
	I1212 15:33:16.274696    5584 main.go:141] libmachine: (calico-246000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1212 15:33:16.274980    5584 main.go:141] libmachine: (calico-246000) DBG | I1212 15:33:16.274673    5592 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/17777-1259/.minikube
	I1212 15:33:16.275108    5584 main.go:141] libmachine: (calico-246000) Downloading /Users/jenkins/minikube-integration/17777-1259/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17777-1259/.minikube/cache/iso/amd64/minikube-v1.32.1-1701996673-17738-amd64.iso...
	I1212 15:33:16.448838    5584 main.go:141] libmachine: (calico-246000) DBG | I1212 15:33:16.448771    5592 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/17777-1259/.minikube/machines/calico-246000/id_rsa...
	I1212 15:33:16.757087    5584 main.go:141] libmachine: (calico-246000) DBG | I1212 15:33:16.756993    5592 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/17777-1259/.minikube/machines/calico-246000/calico-246000.rawdisk...
	I1212 15:33:16.757108    5584 main.go:141] libmachine: (calico-246000) DBG | Writing magic tar header
	I1212 15:33:16.757119    5584 main.go:141] libmachine: (calico-246000) DBG | Writing SSH key tar header
	I1212 15:33:16.757595    5584 main.go:141] libmachine: (calico-246000) DBG | I1212 15:33:16.757553    5592 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/17777-1259/.minikube/machines/calico-246000 ...
	I1212 15:33:17.091833    5584 main.go:141] libmachine: (calico-246000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1212 15:33:17.091852    5584 main.go:141] libmachine: (calico-246000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/17777-1259/.minikube/machines/calico-246000/hyperkit.pid
	I1212 15:33:17.091866    5584 main.go:141] libmachine: (calico-246000) DBG | Using UUID d3086f26-9946-11ee-84ae-f01898ef957c
	I1212 15:33:17.118814    5584 main.go:141] libmachine: (calico-246000) DBG | Generated MAC be:a:8f:f8:67:68
	I1212 15:33:17.118836    5584 main.go:141] libmachine: (calico-246000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=calico-246000
	I1212 15:33:17.118891    5584 main.go:141] libmachine: (calico-246000) DBG | 2023/12/12 15:33:17 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/17777-1259/.minikube/machines/calico-246000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"d3086f26-9946-11ee-84ae-f01898ef957c", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc000282360)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/17777-1259/.minikube/machines/calico-246000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/17777-1259/.minikube/machines/calico-246000/bzimage", Initrd:"/Users/jenkins/minikube-integration/17777-1259/.minikube/machines/calico-246000/initrd", Bootrom:"", CPUs:2, Memory:3072, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1212 15:33:17.118943    5584 main.go:141] libmachine: (calico-246000) DBG | 2023/12/12 15:33:17 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/17777-1259/.minikube/machines/calico-246000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"d3086f26-9946-11ee-84ae-f01898ef957c", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc000282360)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/17777-1259/.minikube/machines/calico-246000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/17777-1259/.minikube/machines/calico-246000/bzimage", Initrd:"/Users/jenkins/minikube-integration/17777-1259/.minikube/machines/calico-246000/initrd", Bootrom:"", CPUs:2, Memory:3072, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1212 15:33:17.118998    5584 main.go:141] libmachine: (calico-246000) DBG | 2023/12/12 15:33:17 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/17777-1259/.minikube/machines/calico-246000/hyperkit.pid", "-c", "2", "-m", "3072M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "d3086f26-9946-11ee-84ae-f01898ef957c", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/17777-1259/.minikube/machines/calico-246000/calico-246000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/17777-1259/.minikube/machines/calico-246000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/17777-1259/.minikube/machines/calico-246000/tty,log=/Users/jenkins/minikube-integration/17777-1259/.minikube/machines/calico-246000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/17777-1259/.minikube/machines/calico-246000/bzimage,/Users/jenkins/minikube-integration/17777-1259/.minikube/machine
s/calico-246000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=calico-246000"}
	I1212 15:33:17.119049    5584 main.go:141] libmachine: (calico-246000) DBG | 2023/12/12 15:33:17 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/17777-1259/.minikube/machines/calico-246000/hyperkit.pid -c 2 -m 3072M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U d3086f26-9946-11ee-84ae-f01898ef957c -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/17777-1259/.minikube/machines/calico-246000/calico-246000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/17777-1259/.minikube/machines/calico-246000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/17777-1259/.minikube/machines/calico-246000/tty,log=/Users/jenkins/minikube-integration/17777-1259/.minikube/machines/calico-246000/console-ring -f kexec,/Users/jenkins/minikube-integration/17777-1259/.minikube/machines/calico-246000/bzimage,/Users/jenkins/minikube-integration/17777-1259/.minikube/machines/calico-246000/initrd,earlyprintk=serial loglevel=3 console=t
tyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=calico-246000"
	I1212 15:33:17.119070    5584 main.go:141] libmachine: (calico-246000) DBG | 2023/12/12 15:33:17 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I1212 15:33:17.122428    5584 main.go:141] libmachine: (calico-246000) DBG | 2023/12/12 15:33:17 DEBUG: hyperkit: Pid is 5593
	I1212 15:33:17.123180    5584 main.go:141] libmachine: (calico-246000) DBG | Attempt 0
	I1212 15:33:17.123231    5584 main.go:141] libmachine: (calico-246000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1212 15:33:17.123289    5584 main.go:141] libmachine: (calico-246000) DBG | hyperkit pid from json: 5593
	I1212 15:33:17.124817    5584 main.go:141] libmachine: (calico-246000) DBG | Searching for be:a:8f:f8:67:68 in /var/db/dhcpd_leases ...
	I1212 15:33:17.124916    5584 main.go:141] libmachine: (calico-246000) DBG | Found 31 entries in /var/db/dhcpd_leases!
	I1212 15:33:17.124935    5584 main.go:141] libmachine: (calico-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:2e:f0:88:28:e9:f0 ID:1,2e:f0:88:28:e9:f0 Lease:0x657a3f22}
	I1212 15:33:17.125034    5584 main.go:141] libmachine: (calico-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:e6:fc:c5:79:3:fe ID:1,e6:fc:c5:79:3:fe Lease:0x657a3ef2}
	I1212 15:33:17.125060    5584 main.go:141] libmachine: (calico-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:9a:41:62:a7:55:94 ID:1,9a:41:62:a7:55:94 Lease:0x6578ed96}
	I1212 15:33:17.125073    5584 main.go:141] libmachine: (calico-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:86:2a:19:7f:73:11 ID:1,86:2a:19:7f:73:11 Lease:0x6578ed51}
	I1212 15:33:17.125088    5584 main.go:141] libmachine: (calico-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:12:a7:c5:2a:83:68 ID:1,12:a7:c5:2a:83:68 Lease:0x657a3e95}
	I1212 15:33:17.125098    5584 main.go:141] libmachine: (calico-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:ca:17:c1:a2:b3:9a ID:1,ca:17:c1:a2:b3:9a Lease:0x657a3e6b}
	I1212 15:33:17.125106    5584 main.go:141] libmachine: (calico-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:de:91:ef:2f:c8:e6 ID:1,de:91:ef:2f:c8:e6 Lease:0x657a3e4e}
	I1212 15:33:17.125120    5584 main.go:141] libmachine: (calico-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:c2:1:1c:d2:d:70 ID:1,c2:1:1c:d2:d:70 Lease:0x657a3d51}
	I1212 15:33:17.125130    5584 main.go:141] libmachine: (calico-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:b2:38:3b:1c:7b:20 ID:1,b2:38:3b:1c:7b:20 Lease:0x657a3d1b}
	I1212 15:33:17.125138    5584 main.go:141] libmachine: (calico-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:9e:62:f3:45:4a:1c ID:1,9e:62:f3:45:4a:1c Lease:0x657a3d0f}
	I1212 15:33:17.125147    5584 main.go:141] libmachine: (calico-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:ca:3e:b:b3:65:c6 ID:1,ca:3e:b:b3:65:c6 Lease:0x6578eb8e}
	I1212 15:33:17.125156    5584 main.go:141] libmachine: (calico-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:16:c9:34:3e:5:c ID:1,16:c9:34:3e:5:c Lease:0x657a3ce6}
	I1212 15:33:17.125164    5584 main.go:141] libmachine: (calico-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:9a:8:ff:ec:e2:b0 ID:1,9a:8:ff:ec:e2:b0 Lease:0x657a3cc7}
	I1212 15:33:17.125172    5584 main.go:141] libmachine: (calico-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:b2:4:5e:b3:6c:e8 ID:1,b2:4:5e:b3:6c:e8 Lease:0x657a3caf}
	I1212 15:33:17.125179    5584 main.go:141] libmachine: (calico-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:e2:5f:6f:53:5d:21 ID:1,e2:5f:6f:53:5d:21 Lease:0x657a3c43}
	I1212 15:33:17.125188    5584 main.go:141] libmachine: (calico-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:2:cb:91:2c:14:6a ID:1,2:cb:91:2c:14:6a Lease:0x657a3bd7}
	I1212 15:33:17.125201    5584 main.go:141] libmachine: (calico-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:fa:97:74:1a:2f:1 ID:1,fa:97:74:1a:2f:1 Lease:0x657a3b9f}
	I1212 15:33:17.125227    5584 main.go:141] libmachine: (calico-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ea:8:9b:fa:1f:1b ID:1,ea:8:9b:fa:1f:1b Lease:0x657a3b0a}
	I1212 15:33:17.125242    5584 main.go:141] libmachine: (calico-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:3a:47:ed:bd:6e:e1 ID:1,3a:47:ed:bd:6e:e1 Lease:0x657a3ae3}
	I1212 15:33:17.125257    5584 main.go:141] libmachine: (calico-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:f2:78:2:3f:65:80 ID:1,f2:78:2:3f:65:80 Lease:0x657a3ab0}
	I1212 15:33:17.125266    5584 main.go:141] libmachine: (calico-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ae:69:eb:53:8c:5b ID:1,ae:69:eb:53:8c:5b Lease:0x6578e85b}
	I1212 15:33:17.125281    5584 main.go:141] libmachine: (calico-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6a:5c:e0:d8:73:5b ID:1,6a:5c:e0:d8:73:5b Lease:0x6578e846}
	I1212 15:33:17.125292    5584 main.go:141] libmachine: (calico-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:22:bc:7e:11:6c:f5 ID:1,22:bc:7e:11:6c:f5 Lease:0x657a397e}
	I1212 15:33:17.125303    5584 main.go:141] libmachine: (calico-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fe:39:cb:bf:ae:44 ID:1,fe:39:cb:bf:ae:44 Lease:0x657a3959}
	I1212 15:33:17.125314    5584 main.go:141] libmachine: (calico-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:9a:c9:f7:34:af:5d ID:1,9a:c9:f7:34:af:5d Lease:0x657a391e}
	I1212 15:33:17.125329    5584 main.go:141] libmachine: (calico-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:8d:b0:a8:5f:be ID:1,1e:8d:b0:a8:5f:be Lease:0x657a388a}
	I1212 15:33:17.125339    5584 main.go:141] libmachine: (calico-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e2:94:44:d7:9:11 ID:1,e2:94:44:d7:9:11 Lease:0x6578e6f7}
	I1212 15:33:17.125346    5584 main.go:141] libmachine: (calico-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:a8:f:4a:47:90 ID:1,9a:a8:f:4a:47:90 Lease:0x657a3755}
	I1212 15:33:17.125354    5584 main.go:141] libmachine: (calico-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:c6:cb:3c:63:0:f6 ID:1,c6:cb:3c:63:0:f6 Lease:0x657a3728}
	I1212 15:33:17.125365    5584 main.go:141] libmachine: (calico-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ea:48:d3:f6:3:6b ID:1,ea:48:d3:f6:3:6b Lease:0x657a3638}
	I1212 15:33:17.125378    5584 main.go:141] libmachine: (calico-246000) DBG | dhcp entry: {Name: IPAddress:192.169.0.2 HWAddress:82:2b:f6:5b:7f:bf ID:1,82:2b:f6:5b:7f:bf Lease:0x657a35f5}
	I1212 15:33:17.130337    5584 main.go:141] libmachine: (calico-246000) DBG | 2023/12/12 15:33:17 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I1212 15:33:17.139178    5584 main.go:141] libmachine: (calico-246000) DBG | 2023/12/12 15:33:17 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/17777-1259/.minikube/machines/calico-246000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I1212 15:33:17.140037    5584 main.go:141] libmachine: (calico-246000) DBG | 2023/12/12 15:33:17 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1212 15:33:17.140062    5584 main.go:141] libmachine: (calico-246000) DBG | 2023/12/12 15:33:17 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1212 15:33:17.140077    5584 main.go:141] libmachine: (calico-246000) DBG | 2023/12/12 15:33:17 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1212 15:33:17.140088    5584 main.go:141] libmachine: (calico-246000) DBG | 2023/12/12 15:33:17 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1212 15:33:17.534243    5584 main.go:141] libmachine: (calico-246000) DBG | 2023/12/12 15:33:17 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I1212 15:33:17.534259    5584 main.go:141] libmachine: (calico-246000) DBG | 2023/12/12 15:33:17 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I1212 15:33:17.638400    5584 main.go:141] libmachine: (calico-246000) DBG | 2023/12/12 15:33:17 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1212 15:33:17.638435    5584 main.go:141] libmachine: (calico-246000) DBG | 2023/12/12 15:33:17 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1212 15:33:17.638446    5584 main.go:141] libmachine: (calico-246000) DBG | 2023/12/12 15:33:17 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1212 15:33:17.638462    5584 main.go:141] libmachine: (calico-246000) DBG | 2023/12/12 15:33:17 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1212 15:33:17.639201    5584 main.go:141] libmachine: (calico-246000) DBG | 2023/12/12 15:33:17 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I1212 15:33:17.639223    5584 main.go:141] libmachine: (calico-246000) DBG | 2023/12/12 15:33:17 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I1212 15:33:19.126145    5584 main.go:141] libmachine: (calico-246000) DBG | Attempt 1
	I1212 15:33:19.126162    5584 main.go:141] libmachine: (calico-246000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1212 15:33:19.126267    5584 main.go:141] libmachine: (calico-246000) DBG | hyperkit pid from json: 5593
	I1212 15:33:19.127109    5584 main.go:141] libmachine: (calico-246000) DBG | Searching for be:a:8f:f8:67:68 in /var/db/dhcpd_leases ...
	I1212 15:33:19.127193    5584 main.go:141] libmachine: (calico-246000) DBG | Found 31 entries in /var/db/dhcpd_leases!
	I1212 15:33:19.127213    5584 main.go:141] libmachine: (calico-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:2e:f0:88:28:e9:f0 ID:1,2e:f0:88:28:e9:f0 Lease:0x657a3f22}
	I1212 15:33:19.127238    5584 main.go:141] libmachine: (calico-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:e6:fc:c5:79:3:fe ID:1,e6:fc:c5:79:3:fe Lease:0x657a3ef2}
	I1212 15:33:19.127263    5584 main.go:141] libmachine: (calico-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:9a:41:62:a7:55:94 ID:1,9a:41:62:a7:55:94 Lease:0x6578ed96}
	I1212 15:33:19.127301    5584 main.go:141] libmachine: (calico-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:86:2a:19:7f:73:11 ID:1,86:2a:19:7f:73:11 Lease:0x6578ed51}
	I1212 15:33:19.127320    5584 main.go:141] libmachine: (calico-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:12:a7:c5:2a:83:68 ID:1,12:a7:c5:2a:83:68 Lease:0x657a3e95}
	I1212 15:33:19.127338    5584 main.go:141] libmachine: (calico-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:ca:17:c1:a2:b3:9a ID:1,ca:17:c1:a2:b3:9a Lease:0x657a3e6b}
	I1212 15:33:19.127359    5584 main.go:141] libmachine: (calico-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:de:91:ef:2f:c8:e6 ID:1,de:91:ef:2f:c8:e6 Lease:0x657a3e4e}
	I1212 15:33:19.127370    5584 main.go:141] libmachine: (calico-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:c2:1:1c:d2:d:70 ID:1,c2:1:1c:d2:d:70 Lease:0x657a3d51}
	I1212 15:33:19.127378    5584 main.go:141] libmachine: (calico-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:b2:38:3b:1c:7b:20 ID:1,b2:38:3b:1c:7b:20 Lease:0x657a3d1b}
	I1212 15:33:19.127384    5584 main.go:141] libmachine: (calico-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:9e:62:f3:45:4a:1c ID:1,9e:62:f3:45:4a:1c Lease:0x657a3d0f}
	I1212 15:33:19.127391    5584 main.go:141] libmachine: (calico-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:ca:3e:b:b3:65:c6 ID:1,ca:3e:b:b3:65:c6 Lease:0x6578eb8e}
	I1212 15:33:19.127398    5584 main.go:141] libmachine: (calico-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:16:c9:34:3e:5:c ID:1,16:c9:34:3e:5:c Lease:0x657a3ce6}
	I1212 15:33:19.127408    5584 main.go:141] libmachine: (calico-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:9a:8:ff:ec:e2:b0 ID:1,9a:8:ff:ec:e2:b0 Lease:0x657a3cc7}
	I1212 15:33:19.127435    5584 main.go:141] libmachine: (calico-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:b2:4:5e:b3:6c:e8 ID:1,b2:4:5e:b3:6c:e8 Lease:0x657a3caf}
	I1212 15:33:19.127448    5584 main.go:141] libmachine: (calico-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:e2:5f:6f:53:5d:21 ID:1,e2:5f:6f:53:5d:21 Lease:0x657a3c43}
	I1212 15:33:19.127458    5584 main.go:141] libmachine: (calico-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:2:cb:91:2c:14:6a ID:1,2:cb:91:2c:14:6a Lease:0x657a3bd7}
	I1212 15:33:19.127466    5584 main.go:141] libmachine: (calico-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:fa:97:74:1a:2f:1 ID:1,fa:97:74:1a:2f:1 Lease:0x657a3b9f}
	I1212 15:33:19.127473    5584 main.go:141] libmachine: (calico-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ea:8:9b:fa:1f:1b ID:1,ea:8:9b:fa:1f:1b Lease:0x657a3b0a}
	I1212 15:33:19.127481    5584 main.go:141] libmachine: (calico-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:3a:47:ed:bd:6e:e1 ID:1,3a:47:ed:bd:6e:e1 Lease:0x657a3ae3}
	I1212 15:33:19.127488    5584 main.go:141] libmachine: (calico-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:f2:78:2:3f:65:80 ID:1,f2:78:2:3f:65:80 Lease:0x657a3ab0}
	I1212 15:33:19.127496    5584 main.go:141] libmachine: (calico-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ae:69:eb:53:8c:5b ID:1,ae:69:eb:53:8c:5b Lease:0x6578e85b}
	I1212 15:33:19.127504    5584 main.go:141] libmachine: (calico-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6a:5c:e0:d8:73:5b ID:1,6a:5c:e0:d8:73:5b Lease:0x6578e846}
	I1212 15:33:19.127512    5584 main.go:141] libmachine: (calico-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:22:bc:7e:11:6c:f5 ID:1,22:bc:7e:11:6c:f5 Lease:0x657a397e}
	I1212 15:33:19.127520    5584 main.go:141] libmachine: (calico-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fe:39:cb:bf:ae:44 ID:1,fe:39:cb:bf:ae:44 Lease:0x657a3959}
	I1212 15:33:19.127529    5584 main.go:141] libmachine: (calico-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:9a:c9:f7:34:af:5d ID:1,9a:c9:f7:34:af:5d Lease:0x657a391e}
	I1212 15:33:19.127537    5584 main.go:141] libmachine: (calico-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:8d:b0:a8:5f:be ID:1,1e:8d:b0:a8:5f:be Lease:0x657a388a}
	I1212 15:33:19.127545    5584 main.go:141] libmachine: (calico-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e2:94:44:d7:9:11 ID:1,e2:94:44:d7:9:11 Lease:0x6578e6f7}
	I1212 15:33:19.127555    5584 main.go:141] libmachine: (calico-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:a8:f:4a:47:90 ID:1,9a:a8:f:4a:47:90 Lease:0x657a3755}
	I1212 15:33:19.127563    5584 main.go:141] libmachine: (calico-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:c6:cb:3c:63:0:f6 ID:1,c6:cb:3c:63:0:f6 Lease:0x657a3728}
	I1212 15:33:19.127571    5584 main.go:141] libmachine: (calico-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ea:48:d3:f6:3:6b ID:1,ea:48:d3:f6:3:6b Lease:0x657a3638}
	I1212 15:33:19.127580    5584 main.go:141] libmachine: (calico-246000) DBG | dhcp entry: {Name: IPAddress:192.169.0.2 HWAddress:82:2b:f6:5b:7f:bf ID:1,82:2b:f6:5b:7f:bf Lease:0x657a35f5}
	I1212 15:33:21.127494    5584 main.go:141] libmachine: (calico-246000) DBG | Attempt 2
	I1212 15:33:21.127512    5584 main.go:141] libmachine: (calico-246000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1212 15:33:21.127573    5584 main.go:141] libmachine: (calico-246000) DBG | hyperkit pid from json: 5593
	I1212 15:33:21.128417    5584 main.go:141] libmachine: (calico-246000) DBG | Searching for be:a:8f:f8:67:68 in /var/db/dhcpd_leases ...
	I1212 15:33:21.128482    5584 main.go:141] libmachine: (calico-246000) DBG | Found 31 entries in /var/db/dhcpd_leases!
	I1212 15:33:21.128499    5584 main.go:141] libmachine: (calico-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:2e:f0:88:28:e9:f0 ID:1,2e:f0:88:28:e9:f0 Lease:0x657a3f22}
	I1212 15:33:21.128523    5584 main.go:141] libmachine: (calico-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:e6:fc:c5:79:3:fe ID:1,e6:fc:c5:79:3:fe Lease:0x657a3ef2}
	I1212 15:33:21.128536    5584 main.go:141] libmachine: (calico-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:9a:41:62:a7:55:94 ID:1,9a:41:62:a7:55:94 Lease:0x6578ed96}
	I1212 15:33:21.128551    5584 main.go:141] libmachine: (calico-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:86:2a:19:7f:73:11 ID:1,86:2a:19:7f:73:11 Lease:0x6578ed51}
	I1212 15:33:21.128558    5584 main.go:141] libmachine: (calico-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:12:a7:c5:2a:83:68 ID:1,12:a7:c5:2a:83:68 Lease:0x657a3e95}
	I1212 15:33:21.128566    5584 main.go:141] libmachine: (calico-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:ca:17:c1:a2:b3:9a ID:1,ca:17:c1:a2:b3:9a Lease:0x657a3e6b}
	I1212 15:33:21.128573    5584 main.go:141] libmachine: (calico-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:de:91:ef:2f:c8:e6 ID:1,de:91:ef:2f:c8:e6 Lease:0x657a3e4e}
	I1212 15:33:21.128585    5584 main.go:141] libmachine: (calico-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:c2:1:1c:d2:d:70 ID:1,c2:1:1c:d2:d:70 Lease:0x657a3d51}
	I1212 15:33:21.128593    5584 main.go:141] libmachine: (calico-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:b2:38:3b:1c:7b:20 ID:1,b2:38:3b:1c:7b:20 Lease:0x657a3d1b}
	I1212 15:33:21.128624    5584 main.go:141] libmachine: (calico-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:9e:62:f3:45:4a:1c ID:1,9e:62:f3:45:4a:1c Lease:0x657a3d0f}
	I1212 15:33:21.128638    5584 main.go:141] libmachine: (calico-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:ca:3e:b:b3:65:c6 ID:1,ca:3e:b:b3:65:c6 Lease:0x6578eb8e}
	I1212 15:33:21.128648    5584 main.go:141] libmachine: (calico-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:16:c9:34:3e:5:c ID:1,16:c9:34:3e:5:c Lease:0x657a3ce6}
	I1212 15:33:21.128657    5584 main.go:141] libmachine: (calico-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:9a:8:ff:ec:e2:b0 ID:1,9a:8:ff:ec:e2:b0 Lease:0x657a3cc7}
	I1212 15:33:21.128666    5584 main.go:141] libmachine: (calico-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:b2:4:5e:b3:6c:e8 ID:1,b2:4:5e:b3:6c:e8 Lease:0x657a3caf}
	I1212 15:33:21.128674    5584 main.go:141] libmachine: (calico-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:e2:5f:6f:53:5d:21 ID:1,e2:5f:6f:53:5d:21 Lease:0x657a3c43}
	I1212 15:33:21.128683    5584 main.go:141] libmachine: (calico-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:2:cb:91:2c:14:6a ID:1,2:cb:91:2c:14:6a Lease:0x657a3bd7}
	I1212 15:33:21.128691    5584 main.go:141] libmachine: (calico-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:fa:97:74:1a:2f:1 ID:1,fa:97:74:1a:2f:1 Lease:0x657a3b9f}
	I1212 15:33:21.128699    5584 main.go:141] libmachine: (calico-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ea:8:9b:fa:1f:1b ID:1,ea:8:9b:fa:1f:1b Lease:0x657a3b0a}
	I1212 15:33:21.128707    5584 main.go:141] libmachine: (calico-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:3a:47:ed:bd:6e:e1 ID:1,3a:47:ed:bd:6e:e1 Lease:0x657a3ae3}
	I1212 15:33:21.128716    5584 main.go:141] libmachine: (calico-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:f2:78:2:3f:65:80 ID:1,f2:78:2:3f:65:80 Lease:0x657a3ab0}
	I1212 15:33:21.128724    5584 main.go:141] libmachine: (calico-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ae:69:eb:53:8c:5b ID:1,ae:69:eb:53:8c:5b Lease:0x6578e85b}
	I1212 15:33:21.128732    5584 main.go:141] libmachine: (calico-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6a:5c:e0:d8:73:5b ID:1,6a:5c:e0:d8:73:5b Lease:0x6578e846}
	I1212 15:33:21.128740    5584 main.go:141] libmachine: (calico-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:22:bc:7e:11:6c:f5 ID:1,22:bc:7e:11:6c:f5 Lease:0x657a397e}
	I1212 15:33:21.128749    5584 main.go:141] libmachine: (calico-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fe:39:cb:bf:ae:44 ID:1,fe:39:cb:bf:ae:44 Lease:0x657a3959}
	I1212 15:33:21.128763    5584 main.go:141] libmachine: (calico-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:9a:c9:f7:34:af:5d ID:1,9a:c9:f7:34:af:5d Lease:0x657a391e}
	I1212 15:33:21.128780    5584 main.go:141] libmachine: (calico-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:8d:b0:a8:5f:be ID:1,1e:8d:b0:a8:5f:be Lease:0x657a388a}
	I1212 15:33:21.128805    5584 main.go:141] libmachine: (calico-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e2:94:44:d7:9:11 ID:1,e2:94:44:d7:9:11 Lease:0x6578e6f7}
	I1212 15:33:21.128814    5584 main.go:141] libmachine: (calico-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:a8:f:4a:47:90 ID:1,9a:a8:f:4a:47:90 Lease:0x657a3755}
	I1212 15:33:21.128825    5584 main.go:141] libmachine: (calico-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:c6:cb:3c:63:0:f6 ID:1,c6:cb:3c:63:0:f6 Lease:0x657a3728}
	I1212 15:33:21.128834    5584 main.go:141] libmachine: (calico-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ea:48:d3:f6:3:6b ID:1,ea:48:d3:f6:3:6b Lease:0x657a3638}
	I1212 15:33:21.128844    5584 main.go:141] libmachine: (calico-246000) DBG | dhcp entry: {Name: IPAddress:192.169.0.2 HWAddress:82:2b:f6:5b:7f:bf ID:1,82:2b:f6:5b:7f:bf Lease:0x657a35f5}
	I1212 15:33:22.591423    5584 main.go:141] libmachine: (calico-246000) DBG | 2023/12/12 15:33:22 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I1212 15:33:22.591467    5584 main.go:141] libmachine: (calico-246000) DBG | 2023/12/12 15:33:22 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I1212 15:33:22.591486    5584 main.go:141] libmachine: (calico-246000) DBG | 2023/12/12 15:33:22 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I1212 15:33:23.129410    5584 main.go:141] libmachine: (calico-246000) DBG | Attempt 3
	I1212 15:33:23.129434    5584 main.go:141] libmachine: (calico-246000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1212 15:33:23.129534    5584 main.go:141] libmachine: (calico-246000) DBG | hyperkit pid from json: 5593
	I1212 15:33:23.130380    5584 main.go:141] libmachine: (calico-246000) DBG | Searching for be:a:8f:f8:67:68 in /var/db/dhcpd_leases ...
	I1212 15:33:23.130445    5584 main.go:141] libmachine: (calico-246000) DBG | Found 31 entries in /var/db/dhcpd_leases!
	I1212 15:33:23.130456    5584 main.go:141] libmachine: (calico-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:2e:f0:88:28:e9:f0 ID:1,2e:f0:88:28:e9:f0 Lease:0x657a3f22}
	I1212 15:33:23.130491    5584 main.go:141] libmachine: (calico-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:e6:fc:c5:79:3:fe ID:1,e6:fc:c5:79:3:fe Lease:0x657a3ef2}
	I1212 15:33:23.130502    5584 main.go:141] libmachine: (calico-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:9a:41:62:a7:55:94 ID:1,9a:41:62:a7:55:94 Lease:0x6578ed96}
	I1212 15:33:23.130518    5584 main.go:141] libmachine: (calico-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:86:2a:19:7f:73:11 ID:1,86:2a:19:7f:73:11 Lease:0x6578ed51}
	I1212 15:33:23.130540    5584 main.go:141] libmachine: (calico-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:12:a7:c5:2a:83:68 ID:1,12:a7:c5:2a:83:68 Lease:0x657a3e95}
	I1212 15:33:23.130558    5584 main.go:141] libmachine: (calico-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:ca:17:c1:a2:b3:9a ID:1,ca:17:c1:a2:b3:9a Lease:0x657a3e6b}
	I1212 15:33:23.130583    5584 main.go:141] libmachine: (calico-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:de:91:ef:2f:c8:e6 ID:1,de:91:ef:2f:c8:e6 Lease:0x657a3e4e}
	I1212 15:33:23.130594    5584 main.go:141] libmachine: (calico-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:c2:1:1c:d2:d:70 ID:1,c2:1:1c:d2:d:70 Lease:0x657a3d51}
	I1212 15:33:23.130612    5584 main.go:141] libmachine: (calico-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:b2:38:3b:1c:7b:20 ID:1,b2:38:3b:1c:7b:20 Lease:0x657a3d1b}
	I1212 15:33:23.130632    5584 main.go:141] libmachine: (calico-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:9e:62:f3:45:4a:1c ID:1,9e:62:f3:45:4a:1c Lease:0x657a3d0f}
	I1212 15:33:23.130644    5584 main.go:141] libmachine: (calico-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:ca:3e:b:b3:65:c6 ID:1,ca:3e:b:b3:65:c6 Lease:0x6578eb8e}
	I1212 15:33:23.130653    5584 main.go:141] libmachine: (calico-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:16:c9:34:3e:5:c ID:1,16:c9:34:3e:5:c Lease:0x657a3ce6}
	I1212 15:33:23.130661    5584 main.go:141] libmachine: (calico-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:9a:8:ff:ec:e2:b0 ID:1,9a:8:ff:ec:e2:b0 Lease:0x657a3cc7}
	I1212 15:33:23.130667    5584 main.go:141] libmachine: (calico-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:b2:4:5e:b3:6c:e8 ID:1,b2:4:5e:b3:6c:e8 Lease:0x657a3caf}
	I1212 15:33:23.130674    5584 main.go:141] libmachine: (calico-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:e2:5f:6f:53:5d:21 ID:1,e2:5f:6f:53:5d:21 Lease:0x657a3c43}
	I1212 15:33:23.130681    5584 main.go:141] libmachine: (calico-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:2:cb:91:2c:14:6a ID:1,2:cb:91:2c:14:6a Lease:0x657a3bd7}
	I1212 15:33:23.130689    5584 main.go:141] libmachine: (calico-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:fa:97:74:1a:2f:1 ID:1,fa:97:74:1a:2f:1 Lease:0x657a3b9f}
	I1212 15:33:23.130697    5584 main.go:141] libmachine: (calico-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ea:8:9b:fa:1f:1b ID:1,ea:8:9b:fa:1f:1b Lease:0x657a3b0a}
	I1212 15:33:23.130706    5584 main.go:141] libmachine: (calico-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:3a:47:ed:bd:6e:e1 ID:1,3a:47:ed:bd:6e:e1 Lease:0x657a3ae3}
	I1212 15:33:23.130714    5584 main.go:141] libmachine: (calico-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:f2:78:2:3f:65:80 ID:1,f2:78:2:3f:65:80 Lease:0x657a3ab0}
	I1212 15:33:23.130724    5584 main.go:141] libmachine: (calico-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ae:69:eb:53:8c:5b ID:1,ae:69:eb:53:8c:5b Lease:0x6578e85b}
	I1212 15:33:23.130732    5584 main.go:141] libmachine: (calico-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6a:5c:e0:d8:73:5b ID:1,6a:5c:e0:d8:73:5b Lease:0x6578e846}
	I1212 15:33:23.130740    5584 main.go:141] libmachine: (calico-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:22:bc:7e:11:6c:f5 ID:1,22:bc:7e:11:6c:f5 Lease:0x657a397e}
	I1212 15:33:23.130748    5584 main.go:141] libmachine: (calico-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fe:39:cb:bf:ae:44 ID:1,fe:39:cb:bf:ae:44 Lease:0x657a3959}
	I1212 15:33:23.130754    5584 main.go:141] libmachine: (calico-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:9a:c9:f7:34:af:5d ID:1,9a:c9:f7:34:af:5d Lease:0x657a391e}
	I1212 15:33:23.130769    5584 main.go:141] libmachine: (calico-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:8d:b0:a8:5f:be ID:1,1e:8d:b0:a8:5f:be Lease:0x657a388a}
	I1212 15:33:23.130782    5584 main.go:141] libmachine: (calico-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e2:94:44:d7:9:11 ID:1,e2:94:44:d7:9:11 Lease:0x6578e6f7}
	I1212 15:33:23.130791    5584 main.go:141] libmachine: (calico-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:a8:f:4a:47:90 ID:1,9a:a8:f:4a:47:90 Lease:0x657a3755}
	I1212 15:33:23.130800    5584 main.go:141] libmachine: (calico-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:c6:cb:3c:63:0:f6 ID:1,c6:cb:3c:63:0:f6 Lease:0x657a3728}
	I1212 15:33:23.130808    5584 main.go:141] libmachine: (calico-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ea:48:d3:f6:3:6b ID:1,ea:48:d3:f6:3:6b Lease:0x657a3638}
	I1212 15:33:23.130817    5584 main.go:141] libmachine: (calico-246000) DBG | dhcp entry: {Name: IPAddress:192.169.0.2 HWAddress:82:2b:f6:5b:7f:bf ID:1,82:2b:f6:5b:7f:bf Lease:0x657a35f5}
	I1212 15:33:25.131608    5584 main.go:141] libmachine: (calico-246000) DBG | Attempt 4
	I1212 15:33:25.131634    5584 main.go:141] libmachine: (calico-246000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1212 15:33:25.131722    5584 main.go:141] libmachine: (calico-246000) DBG | hyperkit pid from json: 5593
	I1212 15:33:25.132654    5584 main.go:141] libmachine: (calico-246000) DBG | Searching for be:a:8f:f8:67:68 in /var/db/dhcpd_leases ...
	I1212 15:33:25.132708    5584 main.go:141] libmachine: (calico-246000) DBG | Found 31 entries in /var/db/dhcpd_leases!
	I1212 15:33:25.132732    5584 main.go:141] libmachine: (calico-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:2e:f0:88:28:e9:f0 ID:1,2e:f0:88:28:e9:f0 Lease:0x657a3f22}
	I1212 15:33:25.132758    5584 main.go:141] libmachine: (calico-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:e6:fc:c5:79:3:fe ID:1,e6:fc:c5:79:3:fe Lease:0x657a3ef2}
	I1212 15:33:25.132778    5584 main.go:141] libmachine: (calico-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:9a:41:62:a7:55:94 ID:1,9a:41:62:a7:55:94 Lease:0x6578ed96}
	I1212 15:33:25.132813    5584 main.go:141] libmachine: (calico-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:86:2a:19:7f:73:11 ID:1,86:2a:19:7f:73:11 Lease:0x6578ed51}
	I1212 15:33:25.132828    5584 main.go:141] libmachine: (calico-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:12:a7:c5:2a:83:68 ID:1,12:a7:c5:2a:83:68 Lease:0x657a3e95}
	I1212 15:33:25.132838    5584 main.go:141] libmachine: (calico-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:ca:17:c1:a2:b3:9a ID:1,ca:17:c1:a2:b3:9a Lease:0x657a3e6b}
	I1212 15:33:25.132847    5584 main.go:141] libmachine: (calico-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:de:91:ef:2f:c8:e6 ID:1,de:91:ef:2f:c8:e6 Lease:0x657a3e4e}
	I1212 15:33:25.132858    5584 main.go:141] libmachine: (calico-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:c2:1:1c:d2:d:70 ID:1,c2:1:1c:d2:d:70 Lease:0x657a3d51}
	I1212 15:33:25.132866    5584 main.go:141] libmachine: (calico-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:b2:38:3b:1c:7b:20 ID:1,b2:38:3b:1c:7b:20 Lease:0x657a3d1b}
	I1212 15:33:25.132882    5584 main.go:141] libmachine: (calico-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:9e:62:f3:45:4a:1c ID:1,9e:62:f3:45:4a:1c Lease:0x657a3d0f}
	I1212 15:33:25.132904    5584 main.go:141] libmachine: (calico-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:ca:3e:b:b3:65:c6 ID:1,ca:3e:b:b3:65:c6 Lease:0x6578eb8e}
	I1212 15:33:25.132918    5584 main.go:141] libmachine: (calico-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:16:c9:34:3e:5:c ID:1,16:c9:34:3e:5:c Lease:0x657a3ce6}
	I1212 15:33:25.132927    5584 main.go:141] libmachine: (calico-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:9a:8:ff:ec:e2:b0 ID:1,9a:8:ff:ec:e2:b0 Lease:0x657a3cc7}
	I1212 15:33:25.132937    5584 main.go:141] libmachine: (calico-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:b2:4:5e:b3:6c:e8 ID:1,b2:4:5e:b3:6c:e8 Lease:0x657a3caf}
	I1212 15:33:25.132946    5584 main.go:141] libmachine: (calico-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:e2:5f:6f:53:5d:21 ID:1,e2:5f:6f:53:5d:21 Lease:0x657a3c43}
	I1212 15:33:25.132956    5584 main.go:141] libmachine: (calico-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:2:cb:91:2c:14:6a ID:1,2:cb:91:2c:14:6a Lease:0x657a3bd7}
	I1212 15:33:25.132973    5584 main.go:141] libmachine: (calico-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:fa:97:74:1a:2f:1 ID:1,fa:97:74:1a:2f:1 Lease:0x657a3b9f}
	I1212 15:33:25.132987    5584 main.go:141] libmachine: (calico-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ea:8:9b:fa:1f:1b ID:1,ea:8:9b:fa:1f:1b Lease:0x657a3b0a}
	I1212 15:33:25.132998    5584 main.go:141] libmachine: (calico-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:3a:47:ed:bd:6e:e1 ID:1,3a:47:ed:bd:6e:e1 Lease:0x657a3ae3}
	I1212 15:33:25.133007    5584 main.go:141] libmachine: (calico-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:f2:78:2:3f:65:80 ID:1,f2:78:2:3f:65:80 Lease:0x657a3ab0}
	I1212 15:33:25.133016    5584 main.go:141] libmachine: (calico-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ae:69:eb:53:8c:5b ID:1,ae:69:eb:53:8c:5b Lease:0x6578e85b}
	I1212 15:33:25.133029    5584 main.go:141] libmachine: (calico-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6a:5c:e0:d8:73:5b ID:1,6a:5c:e0:d8:73:5b Lease:0x6578e846}
	I1212 15:33:25.133038    5584 main.go:141] libmachine: (calico-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:22:bc:7e:11:6c:f5 ID:1,22:bc:7e:11:6c:f5 Lease:0x657a397e}
	I1212 15:33:25.133048    5584 main.go:141] libmachine: (calico-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fe:39:cb:bf:ae:44 ID:1,fe:39:cb:bf:ae:44 Lease:0x657a3959}
	I1212 15:33:25.133064    5584 main.go:141] libmachine: (calico-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:9a:c9:f7:34:af:5d ID:1,9a:c9:f7:34:af:5d Lease:0x657a391e}
	I1212 15:33:25.133075    5584 main.go:141] libmachine: (calico-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:8d:b0:a8:5f:be ID:1,1e:8d:b0:a8:5f:be Lease:0x657a388a}
	I1212 15:33:25.133088    5584 main.go:141] libmachine: (calico-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e2:94:44:d7:9:11 ID:1,e2:94:44:d7:9:11 Lease:0x6578e6f7}
	I1212 15:33:25.133099    5584 main.go:141] libmachine: (calico-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:a8:f:4a:47:90 ID:1,9a:a8:f:4a:47:90 Lease:0x657a3755}
	I1212 15:33:25.133108    5584 main.go:141] libmachine: (calico-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:c6:cb:3c:63:0:f6 ID:1,c6:cb:3c:63:0:f6 Lease:0x657a3728}
	I1212 15:33:25.133118    5584 main.go:141] libmachine: (calico-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ea:48:d3:f6:3:6b ID:1,ea:48:d3:f6:3:6b Lease:0x657a3638}
	I1212 15:33:25.133129    5584 main.go:141] libmachine: (calico-246000) DBG | dhcp entry: {Name: IPAddress:192.169.0.2 HWAddress:82:2b:f6:5b:7f:bf ID:1,82:2b:f6:5b:7f:bf Lease:0x657a35f5}
	I1212 15:33:27.133595    5584 main.go:141] libmachine: (calico-246000) DBG | Attempt 5
	I1212 15:33:27.133612    5584 main.go:141] libmachine: (calico-246000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1212 15:33:27.133687    5584 main.go:141] libmachine: (calico-246000) DBG | hyperkit pid from json: 5593
	I1212 15:33:27.134515    5584 main.go:141] libmachine: (calico-246000) DBG | Searching for be:a:8f:f8:67:68 in /var/db/dhcpd_leases ...
	I1212 15:33:27.134588    5584 main.go:141] libmachine: (calico-246000) DBG | Found 32 entries in /var/db/dhcpd_leases!
	I1212 15:33:27.134601    5584 main.go:141] libmachine: (calico-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:be:a:8f:f8:67:68 ID:1,be:a:8f:f8:67:68 Lease:0x657a3f45}
	I1212 15:33:27.134611    5584 main.go:141] libmachine: (calico-246000) DBG | Found match: be:a:8f:f8:67:68
	I1212 15:33:27.134622    5584 main.go:141] libmachine: (calico-246000) DBG | IP: 192.169.0.33
	I1212 15:33:27.134669    5584 main.go:141] libmachine: (calico-246000) Calling .GetConfigRaw
	I1212 15:33:27.150273    5584 main.go:141] libmachine: (calico-246000) Calling .DriverName
	I1212 15:33:27.150453    5584 main.go:141] libmachine: (calico-246000) Calling .DriverName
	I1212 15:33:27.150556    5584 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1212 15:33:27.150566    5584 main.go:141] libmachine: (calico-246000) Calling .GetState
	I1212 15:33:27.150658    5584 main.go:141] libmachine: (calico-246000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1212 15:33:27.150712    5584 main.go:141] libmachine: (calico-246000) DBG | hyperkit pid from json: 5593
	I1212 15:33:27.151601    5584 main.go:141] libmachine: Detecting operating system of created instance...
	I1212 15:33:27.151614    5584 main.go:141] libmachine: Waiting for SSH to be available...
	I1212 15:33:27.151638    5584 main.go:141] libmachine: Getting to WaitForSSH function...
	I1212 15:33:27.151646    5584 main.go:141] libmachine: (calico-246000) Calling .GetSSHHostname
	I1212 15:33:27.151728    5584 main.go:141] libmachine: (calico-246000) Calling .GetSSHPort
	I1212 15:33:27.151811    5584 main.go:141] libmachine: (calico-246000) Calling .GetSSHKeyPath
	I1212 15:33:27.151890    5584 main.go:141] libmachine: (calico-246000) Calling .GetSSHKeyPath
	I1212 15:33:27.151980    5584 main.go:141] libmachine: (calico-246000) Calling .GetSSHUsername
	I1212 15:33:27.152092    5584 main.go:141] libmachine: Using SSH client type: native
	I1212 15:33:27.152400    5584 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1406180] 0x1408e60 <nil>  [] 0s} 192.169.0.33 22 <nil> <nil>}
	I1212 15:33:27.152408    5584 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1212 15:33:27.208768    5584 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1212 15:33:27.208783    5584 main.go:141] libmachine: Detecting the provisioner...
	I1212 15:33:27.208789    5584 main.go:141] libmachine: (calico-246000) Calling .GetSSHHostname
	I1212 15:33:27.208915    5584 main.go:141] libmachine: (calico-246000) Calling .GetSSHPort
	I1212 15:33:27.209009    5584 main.go:141] libmachine: (calico-246000) Calling .GetSSHKeyPath
	I1212 15:33:27.209097    5584 main.go:141] libmachine: (calico-246000) Calling .GetSSHKeyPath
	I1212 15:33:27.209186    5584 main.go:141] libmachine: (calico-246000) Calling .GetSSHUsername
	I1212 15:33:27.209304    5584 main.go:141] libmachine: Using SSH client type: native
	I1212 15:33:27.209553    5584 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1406180] 0x1408e60 <nil>  [] 0s} 192.169.0.33 22 <nil> <nil>}
	I1212 15:33:27.209562    5584 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1212 15:33:27.266956    5584 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-g0ec83c8-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I1212 15:33:27.267014    5584 main.go:141] libmachine: found compatible host: buildroot
	I1212 15:33:27.267021    5584 main.go:141] libmachine: Provisioning with buildroot...
	I1212 15:33:27.267043    5584 main.go:141] libmachine: (calico-246000) Calling .GetMachineName
	I1212 15:33:27.267186    5584 buildroot.go:166] provisioning hostname "calico-246000"
	I1212 15:33:27.267202    5584 main.go:141] libmachine: (calico-246000) Calling .GetMachineName
	I1212 15:33:27.267299    5584 main.go:141] libmachine: (calico-246000) Calling .GetSSHHostname
	I1212 15:33:27.267423    5584 main.go:141] libmachine: (calico-246000) Calling .GetSSHPort
	I1212 15:33:27.267518    5584 main.go:141] libmachine: (calico-246000) Calling .GetSSHKeyPath
	I1212 15:33:27.267636    5584 main.go:141] libmachine: (calico-246000) Calling .GetSSHKeyPath
	I1212 15:33:27.267784    5584 main.go:141] libmachine: (calico-246000) Calling .GetSSHUsername
	I1212 15:33:27.267938    5584 main.go:141] libmachine: Using SSH client type: native
	I1212 15:33:27.268198    5584 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1406180] 0x1408e60 <nil>  [] 0s} 192.169.0.33 22 <nil> <nil>}
	I1212 15:33:27.268207    5584 main.go:141] libmachine: About to run SSH command:
	sudo hostname calico-246000 && echo "calico-246000" | sudo tee /etc/hostname
	I1212 15:33:27.333449    5584 main.go:141] libmachine: SSH cmd err, output: <nil>: calico-246000
	
	I1212 15:33:27.333482    5584 main.go:141] libmachine: (calico-246000) Calling .GetSSHHostname
	I1212 15:33:27.333668    5584 main.go:141] libmachine: (calico-246000) Calling .GetSSHPort
	I1212 15:33:27.333785    5584 main.go:141] libmachine: (calico-246000) Calling .GetSSHKeyPath
	I1212 15:33:27.333889    5584 main.go:141] libmachine: (calico-246000) Calling .GetSSHKeyPath
	I1212 15:33:27.333997    5584 main.go:141] libmachine: (calico-246000) Calling .GetSSHUsername
	I1212 15:33:27.334129    5584 main.go:141] libmachine: Using SSH client type: native
	I1212 15:33:27.334407    5584 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1406180] 0x1408e60 <nil>  [] 0s} 192.169.0.33 22 <nil> <nil>}
	I1212 15:33:27.334419    5584 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scalico-246000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 calico-246000/g' /etc/hosts;
				else 
					echo '127.0.1.1 calico-246000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 15:33:27.396613    5584 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1212 15:33:27.396653    5584 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/17777-1259/.minikube CaCertPath:/Users/jenkins/minikube-integration/17777-1259/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/17777-1259/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/17777-1259/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/17777-1259/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/17777-1259/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/17777-1259/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/17777-1259/.minikube}
	I1212 15:33:27.396695    5584 buildroot.go:174] setting up certificates
	I1212 15:33:27.396709    5584 provision.go:83] configureAuth start
	I1212 15:33:27.396718    5584 main.go:141] libmachine: (calico-246000) Calling .GetMachineName
	I1212 15:33:27.396869    5584 main.go:141] libmachine: (calico-246000) Calling .GetIP
	I1212 15:33:27.396978    5584 main.go:141] libmachine: (calico-246000) Calling .GetSSHHostname
	I1212 15:33:27.397065    5584 provision.go:138] copyHostCerts
	I1212 15:33:27.397142    5584 exec_runner.go:144] found /Users/jenkins/minikube-integration/17777-1259/.minikube/cert.pem, removing ...
	I1212 15:33:27.397151    5584 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17777-1259/.minikube/cert.pem
	I1212 15:33:27.397289    5584 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17777-1259/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/17777-1259/.minikube/cert.pem (1123 bytes)
	I1212 15:33:27.397546    5584 exec_runner.go:144] found /Users/jenkins/minikube-integration/17777-1259/.minikube/key.pem, removing ...
	I1212 15:33:27.397553    5584 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17777-1259/.minikube/key.pem
	I1212 15:33:27.397628    5584 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17777-1259/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/17777-1259/.minikube/key.pem (1675 bytes)
	I1212 15:33:27.397801    5584 exec_runner.go:144] found /Users/jenkins/minikube-integration/17777-1259/.minikube/ca.pem, removing ...
	I1212 15:33:27.397807    5584 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17777-1259/.minikube/ca.pem
	I1212 15:33:27.397877    5584 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17777-1259/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/17777-1259/.minikube/ca.pem (1082 bytes)
	I1212 15:33:27.398030    5584 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/17777-1259/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/17777-1259/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/17777-1259/.minikube/certs/ca-key.pem org=jenkins.calico-246000 san=[192.169.0.33 192.169.0.33 localhost 127.0.0.1 minikube calico-246000]
	I1212 15:33:27.481486    5584 provision.go:172] copyRemoteCerts
	I1212 15:33:27.481550    5584 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 15:33:27.481570    5584 main.go:141] libmachine: (calico-246000) Calling .GetSSHHostname
	I1212 15:33:27.481760    5584 main.go:141] libmachine: (calico-246000) Calling .GetSSHPort
	I1212 15:33:27.481865    5584 main.go:141] libmachine: (calico-246000) Calling .GetSSHKeyPath
	I1212 15:33:27.481972    5584 main.go:141] libmachine: (calico-246000) Calling .GetSSHUsername
	I1212 15:33:27.482076    5584 sshutil.go:53] new ssh client: &{IP:192.169.0.33 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17777-1259/.minikube/machines/calico-246000/id_rsa Username:docker}
	I1212 15:33:27.516836    5584 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17777-1259/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1212 15:33:27.533653    5584 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17777-1259/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1212 15:33:27.549912    5584 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17777-1259/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I1212 15:33:27.566016    5584 provision.go:86] duration metric: configureAuth took 169.295763ms
	I1212 15:33:27.566031    5584 buildroot.go:189] setting minikube options for container-runtime
	I1212 15:33:27.566175    5584 config.go:182] Loaded profile config "calico-246000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1212 15:33:27.566190    5584 main.go:141] libmachine: (calico-246000) Calling .DriverName
	I1212 15:33:27.566339    5584 main.go:141] libmachine: (calico-246000) Calling .GetSSHHostname
	I1212 15:33:27.566443    5584 main.go:141] libmachine: (calico-246000) Calling .GetSSHPort
	I1212 15:33:27.566533    5584 main.go:141] libmachine: (calico-246000) Calling .GetSSHKeyPath
	I1212 15:33:27.566602    5584 main.go:141] libmachine: (calico-246000) Calling .GetSSHKeyPath
	I1212 15:33:27.566672    5584 main.go:141] libmachine: (calico-246000) Calling .GetSSHUsername
	I1212 15:33:27.566776    5584 main.go:141] libmachine: Using SSH client type: native
	I1212 15:33:27.567017    5584 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1406180] 0x1408e60 <nil>  [] 0s} 192.169.0.33 22 <nil> <nil>}
	I1212 15:33:27.567025    5584 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1212 15:33:27.625389    5584 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1212 15:33:27.625409    5584 buildroot.go:70] root file system type: tmpfs
	I1212 15:33:27.625490    5584 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1212 15:33:27.625509    5584 main.go:141] libmachine: (calico-246000) Calling .GetSSHHostname
	I1212 15:33:27.625658    5584 main.go:141] libmachine: (calico-246000) Calling .GetSSHPort
	I1212 15:33:27.625781    5584 main.go:141] libmachine: (calico-246000) Calling .GetSSHKeyPath
	I1212 15:33:27.625879    5584 main.go:141] libmachine: (calico-246000) Calling .GetSSHKeyPath
	I1212 15:33:27.625977    5584 main.go:141] libmachine: (calico-246000) Calling .GetSSHUsername
	I1212 15:33:27.626095    5584 main.go:141] libmachine: Using SSH client type: native
	I1212 15:33:27.626355    5584 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1406180] 0x1408e60 <nil>  [] 0s} 192.169.0.33 22 <nil> <nil>}
	I1212 15:33:27.626405    5584 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1212 15:33:27.692254    5584 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1212 15:33:27.692277    5584 main.go:141] libmachine: (calico-246000) Calling .GetSSHHostname
	I1212 15:33:27.692435    5584 main.go:141] libmachine: (calico-246000) Calling .GetSSHPort
	I1212 15:33:27.692566    5584 main.go:141] libmachine: (calico-246000) Calling .GetSSHKeyPath
	I1212 15:33:27.692680    5584 main.go:141] libmachine: (calico-246000) Calling .GetSSHKeyPath
	I1212 15:33:27.692817    5584 main.go:141] libmachine: (calico-246000) Calling .GetSSHUsername
	I1212 15:33:27.693039    5584 main.go:141] libmachine: Using SSH client type: native
	I1212 15:33:27.693307    5584 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1406180] 0x1408e60 <nil>  [] 0s} 192.169.0.33 22 <nil> <nil>}
	I1212 15:33:27.693320    5584 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1212 15:33:28.290527    5584 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1212 15:33:28.290546    5584 main.go:141] libmachine: Checking connection to Docker...
	I1212 15:33:28.290555    5584 main.go:141] libmachine: (calico-246000) Calling .GetURL
	I1212 15:33:28.290717    5584 main.go:141] libmachine: Docker is up and running!
	I1212 15:33:28.290726    5584 main.go:141] libmachine: Reticulating splines...
	I1212 15:33:28.290735    5584 client.go:171] LocalClient.Create took 12.043638313s
	I1212 15:33:28.290746    5584 start.go:167] duration metric: libmachine.API.Create for "calico-246000" took 12.043686675s
	I1212 15:33:28.290754    5584 start.go:300] post-start starting for "calico-246000" (driver="hyperkit")
	I1212 15:33:28.290762    5584 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 15:33:28.290772    5584 main.go:141] libmachine: (calico-246000) Calling .DriverName
	I1212 15:33:28.290949    5584 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 15:33:28.290961    5584 main.go:141] libmachine: (calico-246000) Calling .GetSSHHostname
	I1212 15:33:28.291073    5584 main.go:141] libmachine: (calico-246000) Calling .GetSSHPort
	I1212 15:33:28.291208    5584 main.go:141] libmachine: (calico-246000) Calling .GetSSHKeyPath
	I1212 15:33:28.291320    5584 main.go:141] libmachine: (calico-246000) Calling .GetSSHUsername
	I1212 15:33:28.291419    5584 sshutil.go:53] new ssh client: &{IP:192.169.0.33 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17777-1259/.minikube/machines/calico-246000/id_rsa Username:docker}
	I1212 15:33:28.326541    5584 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 15:33:28.329422    5584 info.go:137] Remote host: Buildroot 2021.02.12
	I1212 15:33:28.329439    5584 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17777-1259/.minikube/addons for local assets ...
	I1212 15:33:28.329552    5584 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17777-1259/.minikube/files for local assets ...
	I1212 15:33:28.329736    5584 filesync.go:149] local asset: /Users/jenkins/minikube-integration/17777-1259/.minikube/files/etc/ssl/certs/17202.pem -> 17202.pem in /etc/ssl/certs
	I1212 15:33:28.329936    5584 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 15:33:28.336675    5584 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17777-1259/.minikube/files/etc/ssl/certs/17202.pem --> /etc/ssl/certs/17202.pem (1708 bytes)
	I1212 15:33:28.352916    5584 start.go:303] post-start completed in 62.153856ms
	I1212 15:33:28.352953    5584 main.go:141] libmachine: (calico-246000) Calling .GetConfigRaw
	I1212 15:33:28.353558    5584 main.go:141] libmachine: (calico-246000) Calling .GetIP
	I1212 15:33:28.353720    5584 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/calico-246000/config.json ...
	I1212 15:33:28.354073    5584 start.go:128] duration metric: createHost completed in 12.138506836s
	I1212 15:33:28.354089    5584 main.go:141] libmachine: (calico-246000) Calling .GetSSHHostname
	I1212 15:33:28.354200    5584 main.go:141] libmachine: (calico-246000) Calling .GetSSHPort
	I1212 15:33:28.354298    5584 main.go:141] libmachine: (calico-246000) Calling .GetSSHKeyPath
	I1212 15:33:28.354406    5584 main.go:141] libmachine: (calico-246000) Calling .GetSSHKeyPath
	I1212 15:33:28.354510    5584 main.go:141] libmachine: (calico-246000) Calling .GetSSHUsername
	I1212 15:33:28.354624    5584 main.go:141] libmachine: Using SSH client type: native
	I1212 15:33:28.354873    5584 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1406180] 0x1408e60 <nil>  [] 0s} 192.169.0.33 22 <nil> <nil>}
	I1212 15:33:28.354881    5584 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1212 15:33:28.412968    5584 main.go:141] libmachine: SSH cmd err, output: <nil>: 1702424008.494998639
	
	I1212 15:33:28.412980    5584 fix.go:206] guest clock: 1702424008.494998639
	I1212 15:33:28.412986    5584 fix.go:219] Guest: 2023-12-12 15:33:28.494998639 -0800 PST Remote: 2023-12-12 15:33:28.354082 -0800 PST m=+12.816878675 (delta=140.916639ms)
	I1212 15:33:28.413009    5584 fix.go:190] guest clock delta is within tolerance: 140.916639ms
	I1212 15:33:28.413014    5584 start.go:83] releasing machines lock for "calico-246000", held for 12.197523558s
	I1212 15:33:28.413033    5584 main.go:141] libmachine: (calico-246000) Calling .DriverName
	I1212 15:33:28.413165    5584 main.go:141] libmachine: (calico-246000) Calling .GetIP
	I1212 15:33:28.413260    5584 main.go:141] libmachine: (calico-246000) Calling .DriverName
	I1212 15:33:28.413565    5584 main.go:141] libmachine: (calico-246000) Calling .DriverName
	I1212 15:33:28.413690    5584 main.go:141] libmachine: (calico-246000) Calling .DriverName
	I1212 15:33:28.413790    5584 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 15:33:28.413833    5584 main.go:141] libmachine: (calico-246000) Calling .GetSSHHostname
	I1212 15:33:28.413918    5584 ssh_runner.go:195] Run: cat /version.json
	I1212 15:33:28.413934    5584 main.go:141] libmachine: (calico-246000) Calling .GetSSHHostname
	I1212 15:33:28.413954    5584 main.go:141] libmachine: (calico-246000) Calling .GetSSHPort
	I1212 15:33:28.414120    5584 main.go:141] libmachine: (calico-246000) Calling .GetSSHPort
	I1212 15:33:28.414139    5584 main.go:141] libmachine: (calico-246000) Calling .GetSSHKeyPath
	I1212 15:33:28.414241    5584 main.go:141] libmachine: (calico-246000) Calling .GetSSHUsername
	I1212 15:33:28.414261    5584 main.go:141] libmachine: (calico-246000) Calling .GetSSHKeyPath
	I1212 15:33:28.414372    5584 sshutil.go:53] new ssh client: &{IP:192.169.0.33 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17777-1259/.minikube/machines/calico-246000/id_rsa Username:docker}
	I1212 15:33:28.414379    5584 main.go:141] libmachine: (calico-246000) Calling .GetSSHUsername
	I1212 15:33:28.414507    5584 sshutil.go:53] new ssh client: &{IP:192.169.0.33 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17777-1259/.minikube/machines/calico-246000/id_rsa Username:docker}
	I1212 15:33:28.495285    5584 ssh_runner.go:195] Run: systemctl --version
	I1212 15:33:28.499118    5584 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 15:33:28.502663    5584 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 15:33:28.503150    5584 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 15:33:28.514695    5584 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1212 15:33:28.514717    5584 start.go:475] detecting cgroup driver to use...
	I1212 15:33:28.514828    5584 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 15:33:28.527731    5584 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1212 15:33:28.534996    5584 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1212 15:33:28.542400    5584 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1212 15:33:28.542463    5584 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1212 15:33:28.550068    5584 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 15:33:28.557646    5584 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1212 15:33:28.564806    5584 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 15:33:28.572154    5584 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 15:33:28.579900    5584 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1212 15:33:28.587365    5584 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 15:33:28.594119    5584 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 15:33:28.600971    5584 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 15:33:28.685562    5584 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1212 15:33:28.696911    5584 start.go:475] detecting cgroup driver to use...
	I1212 15:33:28.696980    5584 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1212 15:33:28.706966    5584 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 15:33:28.715850    5584 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 15:33:28.728164    5584 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 15:33:28.736859    5584 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1212 15:33:28.746140    5584 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1212 15:33:28.768855    5584 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1212 15:33:28.778723    5584 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 15:33:28.791342    5584 ssh_runner.go:195] Run: which cri-dockerd
	I1212 15:33:28.793852    5584 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1212 15:33:28.800558    5584 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1212 15:33:28.811930    5584 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1212 15:33:28.909197    5584 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1212 15:33:29.004486    5584 docker.go:560] configuring docker to use "cgroupfs" as cgroup driver...
	I1212 15:33:29.004574    5584 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1212 15:33:29.016893    5584 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 15:33:29.113689    5584 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1212 15:33:30.434889    5584 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.32118611s)
	I1212 15:33:30.434957    5584 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1212 15:33:30.519121    5584 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1212 15:33:30.614568    5584 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1212 15:33:30.717677    5584 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 15:33:30.818314    5584 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1212 15:33:30.830073    5584 ssh_runner.go:195] Run: sudo journalctl --no-pager -u cri-docker.socket
	I1212 15:33:30.861186    5584 out.go:177] 
	W1212 15:33:30.886296    5584 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Job failed. See "journalctl -xe" for details.
	
	sudo journalctl --no-pager -u cri-docker.socket:
	-- stdout --
	-- Journal begins at Tue 2023-12-12 23:33:24 UTC, ends at Tue 2023-12-12 23:33:30 UTC. --
	Dec 12 23:33:25 minikube systemd[1]: Starting CRI Docker Socket for the API.
	Dec 12 23:33:25 minikube systemd[1]: Listening on CRI Docker Socket for the API.
	Dec 12 23:33:28 calico-246000 systemd[1]: cri-docker.socket: Succeeded.
	Dec 12 23:33:28 calico-246000 systemd[1]: Closed CRI Docker Socket for the API.
	Dec 12 23:33:28 calico-246000 systemd[1]: Stopping CRI Docker Socket for the API.
	Dec 12 23:33:28 calico-246000 systemd[1]: Starting CRI Docker Socket for the API.
	Dec 12 23:33:28 calico-246000 systemd[1]: Listening on CRI Docker Socket for the API.
	Dec 12 23:33:30 calico-246000 systemd[1]: cri-docker.socket: Succeeded.
	Dec 12 23:33:30 calico-246000 systemd[1]: Closed CRI Docker Socket for the API.
	Dec 12 23:33:30 calico-246000 systemd[1]: Stopping CRI Docker Socket for the API.
	Dec 12 23:33:30 calico-246000 systemd[1]: cri-docker.socket: Socket service cri-docker.service already active, refusing.
	Dec 12 23:33:30 calico-246000 systemd[1]: Failed to listen on CRI Docker Socket for the API.
	
	-- /stdout --
	W1212 15:33:30.886328    5584 out.go:239] * 
	W1212 15:33:30.887570    5584 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 15:33:30.949877    5584 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 90
--- FAIL: TestNetworkPlugins/group/calico/Start (15.47s)
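The journal output above points at the immediate cause: systemd refused to re-listen on cri-docker.socket because cri-docker.service was still active ("Socket service cri-docker.service already active, refusing"), so the "sudo systemctl restart cri-docker.socket" step exits with status 1 and minikube aborts with RUNTIME_ENABLE. A minimal manual-recovery sketch, assuming the profile name calico-246000 taken from the journal and SSH access to the node (not verified against this run):

	# stop the already-active service so the socket unit can be re-bound
	minikube ssh -p calico-246000 "sudo systemctl stop cri-docker.service"
	# restart the socket, then bring the service back up
	minikube ssh -p calico-246000 "sudo systemctl restart cri-docker.socket"
	minikube ssh -p calico-246000 "sudo systemctl start cri-docker.service"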

                                                
                                    

Test pass (287/323)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.16.0/json-events 18.98
4 TestDownloadOnly/v1.16.0/preload-exists 0
7 TestDownloadOnly/v1.16.0/kubectl 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.29
10 TestDownloadOnly/v1.28.4/json-events 9.1
11 TestDownloadOnly/v1.28.4/preload-exists 0
14 TestDownloadOnly/v1.28.4/kubectl 0
15 TestDownloadOnly/v1.28.4/LogsDuration 0.33
17 TestDownloadOnly/v1.29.0-rc.2/json-events 10.84
18 TestDownloadOnly/v1.29.0-rc.2/preload-exists 0
21 TestDownloadOnly/v1.29.0-rc.2/kubectl 0
22 TestDownloadOnly/v1.29.0-rc.2/LogsDuration 0.32
23 TestDownloadOnly/DeleteAll 0.4
24 TestDownloadOnly/DeleteAlwaysSucceeds 0.37
26 TestBinaryMirror 1.01
27 TestOffline 54.43
30 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.21
31 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.19
32 TestAddons/Setup 132.77
34 TestAddons/parallel/Registry 14.53
35 TestAddons/parallel/Ingress 20.93
36 TestAddons/parallel/InspektorGadget 10.52
37 TestAddons/parallel/MetricsServer 5.6
38 TestAddons/parallel/HelmTiller 10.37
40 TestAddons/parallel/CSI 59.49
41 TestAddons/parallel/Headlamp 14.12
42 TestAddons/parallel/CloudSpanner 5.42
43 TestAddons/parallel/LocalPath 10.29
44 TestAddons/parallel/NvidiaDevicePlugin 5.38
47 TestAddons/serial/GCPAuth/Namespaces 0.1
48 TestAddons/StoppedEnableDisable 5.77
49 TestCertOptions 43.64
50 TestCertExpiration 243.95
51 TestDockerFlags 42.38
52 TestForceSystemdFlag 40.78
53 TestForceSystemdEnv 39.6
56 TestHyperKitDriverInstallOrUpdate 7.37
59 TestErrorSpam/setup 34.26
60 TestErrorSpam/start 1.53
61 TestErrorSpam/status 0.48
62 TestErrorSpam/pause 1.27
63 TestErrorSpam/unpause 1.32
64 TestErrorSpam/stop 5.68
67 TestFunctional/serial/CopySyncFile 0
68 TestFunctional/serial/StartWithProxy 87.83
69 TestFunctional/serial/AuditLog 0
70 TestFunctional/serial/SoftStart 39.24
71 TestFunctional/serial/KubeContext 0.04
72 TestFunctional/serial/KubectlGetPods 0.06
75 TestFunctional/serial/CacheCmd/cache/add_remote 3.3
76 TestFunctional/serial/CacheCmd/cache/add_local 1.46
77 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.08
78 TestFunctional/serial/CacheCmd/cache/list 0.08
79 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.2
80 TestFunctional/serial/CacheCmd/cache/cache_reload 1.18
81 TestFunctional/serial/CacheCmd/cache/delete 0.16
82 TestFunctional/serial/MinikubeKubectlCmd 0.54
83 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.79
84 TestFunctional/serial/ExtraConfig 35.37
85 TestFunctional/serial/ComponentHealth 0.05
86 TestFunctional/serial/LogsCmd 2.9
87 TestFunctional/serial/LogsFileCmd 2.88
88 TestFunctional/serial/InvalidService 4.05
90 TestFunctional/parallel/ConfigCmd 0.52
91 TestFunctional/parallel/DashboardCmd 10.71
92 TestFunctional/parallel/DryRun 1.03
93 TestFunctional/parallel/InternationalLanguage 0.65
94 TestFunctional/parallel/StatusCmd 0.53
98 TestFunctional/parallel/ServiceCmdConnect 8.56
99 TestFunctional/parallel/AddonsCmd 0.27
100 TestFunctional/parallel/PersistentVolumeClaim 27.25
102 TestFunctional/parallel/SSHCmd 0.31
103 TestFunctional/parallel/CpCmd 1.23
104 TestFunctional/parallel/MySQL 25.51
105 TestFunctional/parallel/FileSync 0.23
106 TestFunctional/parallel/CertSync 1.36
110 TestFunctional/parallel/NodeLabels 0.05
112 TestFunctional/parallel/NonActiveRuntimeDisabled 0.17
114 TestFunctional/parallel/License 0.54
115 TestFunctional/parallel/Version/short 0.1
116 TestFunctional/parallel/Version/components 0.49
117 TestFunctional/parallel/ImageCommands/ImageListShort 0.19
118 TestFunctional/parallel/ImageCommands/ImageListTable 0.2
119 TestFunctional/parallel/ImageCommands/ImageListJson 0.2
120 TestFunctional/parallel/ImageCommands/ImageListYaml 0.17
121 TestFunctional/parallel/ImageCommands/ImageBuild 2.26
122 TestFunctional/parallel/ImageCommands/Setup 2.43
123 TestFunctional/parallel/DockerEnv/bash 0.85
124 TestFunctional/parallel/UpdateContextCmd/no_changes 0.21
125 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.25
126 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.26
127 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 3.22
128 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 2.16
129 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 5.24
130 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.26
131 TestFunctional/parallel/ImageCommands/ImageRemove 0.37
132 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.29
133 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.36
134 TestFunctional/parallel/ServiceCmd/DeployApp 13.14
136 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.38
137 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.02
139 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 10.17
140 TestFunctional/parallel/ServiceCmd/List 0.38
141 TestFunctional/parallel/ServiceCmd/JSONOutput 0.38
142 TestFunctional/parallel/ServiceCmd/HTTPS 0.26
143 TestFunctional/parallel/ServiceCmd/Format 0.27
144 TestFunctional/parallel/ServiceCmd/URL 0.29
145 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.05
146 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.02
147 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.04
148 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.02
149 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.02
150 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.13
151 TestFunctional/parallel/ProfileCmd/profile_not_create 0.31
152 TestFunctional/parallel/ProfileCmd/profile_list 0.29
153 TestFunctional/parallel/ProfileCmd/profile_json_output 0.29
154 TestFunctional/parallel/MountCmd/any-port 6.19
155 TestFunctional/parallel/MountCmd/specific-port 1.37
156 TestFunctional/parallel/MountCmd/VerifyCleanup 1.42
157 TestFunctional/delete_addon-resizer_images 0.21
158 TestFunctional/delete_my-image_image 0.05
159 TestFunctional/delete_minikube_cached_images 0.05
163 TestImageBuild/serial/Setup 37.15
164 TestImageBuild/serial/NormalBuild 1.18
165 TestImageBuild/serial/BuildWithBuildArg 0.73
166 TestImageBuild/serial/BuildWithDockerIgnore 0.24
167 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.27
170 TestIngressAddonLegacy/StartLegacyK8sCluster 101.9
172 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 13.81
173 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.53
174 TestIngressAddonLegacy/serial/ValidateIngressAddons 35.76
177 TestJSONOutput/start/Command 47.29
178 TestJSONOutput/start/Audit 0
180 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
181 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
183 TestJSONOutput/pause/Command 0.45
184 TestJSONOutput/pause/Audit 0
186 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
187 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
189 TestJSONOutput/unpause/Command 0.41
190 TestJSONOutput/unpause/Audit 0
192 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
193 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
195 TestJSONOutput/stop/Command 8.17
196 TestJSONOutput/stop/Audit 0
198 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
199 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
200 TestErrorJSONOutput 0.76
205 TestMainNoArgs 0.08
206 TestMinikubeProfile 85.24
209 TestMountStart/serial/StartWithMountFirst 16.37
210 TestMountStart/serial/VerifyMountFirst 0.31
211 TestMountStart/serial/StartWithMountSecond 16.22
212 TestMountStart/serial/VerifyMountSecond 0.29
213 TestMountStart/serial/DeleteFirst 2.37
214 TestMountStart/serial/VerifyMountPostDelete 0.29
215 TestMountStart/serial/Stop 2.22
216 TestMountStart/serial/RestartStopped 16.15
217 TestMountStart/serial/VerifyMountPostStop 0.31
229 TestMultiNode/serial/RestartKeepsNodes 67.59
237 TestPreload 148.96
239 TestScheduledStopUnix 105.62
240 TestSkaffold 110.55
243 TestRunningBinaryUpgrade 185.09
245 TestKubernetesUpgrade 151.66
258 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 3.37
259 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 6.67
260 TestStoppedBinaryUpgrade/Setup 1.61
261 TestStoppedBinaryUpgrade/Upgrade 153.67
263 TestPause/serial/Start 49.95
264 TestStoppedBinaryUpgrade/MinikubeLogs 2.6
273 TestNoKubernetes/serial/StartNoK8sWithVersion 0.58
274 TestNoKubernetes/serial/StartWithK8s 37.58
275 TestPause/serial/SecondStartNoReconfiguration 35.19
276 TestNoKubernetes/serial/StartWithStopK8s 16.33
277 TestNoKubernetes/serial/Start 18.27
278 TestPause/serial/Pause 0.51
279 TestPause/serial/VerifyStatus 0.17
280 TestPause/serial/Unpause 0.53
281 TestPause/serial/PauseAgain 0.63
282 TestPause/serial/DeletePaused 5.27
283 TestNoKubernetes/serial/VerifyK8sNotRunning 0.13
284 TestNoKubernetes/serial/ProfileList 28.87
285 TestPause/serial/VerifyDeletedResources 0.27
286 TestNetworkPlugins/group/auto/Start 52.54
287 TestNoKubernetes/serial/Stop 2.25
288 TestNoKubernetes/serial/StartNoArgs 17.15
289 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.13
290 TestNetworkPlugins/group/kindnet/Start 59.1
291 TestNetworkPlugins/group/auto/KubeletFlags 0.17
292 TestNetworkPlugins/group/auto/NetCatPod 12.17
293 TestNetworkPlugins/group/auto/DNS 0.15
294 TestNetworkPlugins/group/auto/Localhost 0.11
295 TestNetworkPlugins/group/auto/HairPin 0.11
297 TestNetworkPlugins/group/kindnet/ControllerPod 5.02
298 TestNetworkPlugins/group/kindnet/KubeletFlags 0.17
299 TestNetworkPlugins/group/kindnet/NetCatPod 10.17
300 TestNetworkPlugins/group/custom-flannel/Start 59.51
301 TestNetworkPlugins/group/kindnet/DNS 0.15
302 TestNetworkPlugins/group/kindnet/Localhost 0.1
303 TestNetworkPlugins/group/kindnet/HairPin 0.11
304 TestNetworkPlugins/group/false/Start 48.27
305 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.17
306 TestNetworkPlugins/group/custom-flannel/NetCatPod 12.18
307 TestNetworkPlugins/group/custom-flannel/DNS 0.13
308 TestNetworkPlugins/group/custom-flannel/Localhost 0.11
309 TestNetworkPlugins/group/custom-flannel/HairPin 0.11
310 TestNetworkPlugins/group/false/KubeletFlags 0.15
311 TestNetworkPlugins/group/false/NetCatPod 11.18
312 TestNetworkPlugins/group/false/DNS 0.15
313 TestNetworkPlugins/group/false/Localhost 0.1
314 TestNetworkPlugins/group/false/HairPin 0.1
315 TestNetworkPlugins/group/enable-default-cni/Start 51.67
316 TestNetworkPlugins/group/flannel/Start 59.27
317 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.16
318 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.18
319 TestNetworkPlugins/group/enable-default-cni/DNS 0.14
320 TestNetworkPlugins/group/enable-default-cni/Localhost 0.11
321 TestNetworkPlugins/group/enable-default-cni/HairPin 0.11
322 TestNetworkPlugins/group/flannel/ControllerPod 5.01
323 TestNetworkPlugins/group/flannel/KubeletFlags 0.16
324 TestNetworkPlugins/group/flannel/NetCatPod 11.17
325 TestNetworkPlugins/group/bridge/Start 88.81
326 TestNetworkPlugins/group/flannel/DNS 0.12
327 TestNetworkPlugins/group/flannel/Localhost 0.1
328 TestNetworkPlugins/group/flannel/HairPin 0.1
329 TestNetworkPlugins/group/kubenet/Start 52.45
330 TestNetworkPlugins/group/kubenet/KubeletFlags 0.16
331 TestNetworkPlugins/group/kubenet/NetCatPod 10.17
332 TestNetworkPlugins/group/bridge/KubeletFlags 0.16
333 TestNetworkPlugins/group/bridge/NetCatPod 11.2
334 TestNetworkPlugins/group/kubenet/DNS 0.13
335 TestNetworkPlugins/group/kubenet/Localhost 0.12
336 TestNetworkPlugins/group/kubenet/HairPin 0.1
337 TestNetworkPlugins/group/bridge/DNS 0.13
338 TestNetworkPlugins/group/bridge/Localhost 0.11
339 TestNetworkPlugins/group/bridge/HairPin 0.11
341 TestStartStop/group/old-k8s-version/serial/FirstStart 152.54
343 TestStartStop/group/no-preload/serial/FirstStart 66.54
344 TestStartStop/group/no-preload/serial/DeployApp 9.54
345 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.79
346 TestStartStop/group/no-preload/serial/Stop 8.3
347 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.33
348 TestStartStop/group/no-preload/serial/SecondStart 301.16
349 TestStartStop/group/old-k8s-version/serial/DeployApp 10.28
350 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.7
351 TestStartStop/group/old-k8s-version/serial/Stop 8.26
352 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.34
353 TestStartStop/group/old-k8s-version/serial/SecondStart 457.23
354 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 5.01
355 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.06
356 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.17
357 TestStartStop/group/no-preload/serial/Pause 1.95
359 TestStartStop/group/embed-certs/serial/FirstStart 49.34
360 TestStartStop/group/embed-certs/serial/DeployApp 9.26
361 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.85
362 TestStartStop/group/embed-certs/serial/Stop 8.23
363 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.34
364 TestStartStop/group/embed-certs/serial/SecondStart 300.29
365 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 5.01
366 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.06
367 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.17
368 TestStartStop/group/old-k8s-version/serial/Pause 1.8
370 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 61.88
371 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.25
372 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.88
373 TestStartStop/group/default-k8s-diff-port/serial/Stop 8.26
374 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.32
375 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 296.23
376 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 5.01
377 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.06
378 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.16
379 TestStartStop/group/embed-certs/serial/Pause 1.95
381 TestStartStop/group/newest-cni/serial/FirstStart 47.23
382 TestStartStop/group/newest-cni/serial/DeployApp 0
383 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.02
384 TestStartStop/group/newest-cni/serial/Stop 8.3
385 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.33
386 TestStartStop/group/newest-cni/serial/SecondStart 37.62
387 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
388 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
389 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.16
390 TestStartStop/group/newest-cni/serial/Pause 1.76
391 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 5.01
392 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.06
393 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.17
394 TestStartStop/group/default-k8s-diff-port/serial/Pause 1.89
x
+
TestDownloadOnly/v1.16.0/json-events (18.98s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-334000 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=hyperkit 
aaa_download_only_test.go:69: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-334000 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=hyperkit : (18.980228088s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (18.98s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/kubectl
--- PASS: TestDownloadOnly/v1.16.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/LogsDuration (0.29s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-334000
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-334000: exit status 85 (290.526364ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-334000 | jenkins | v1.32.0 | 12 Dec 23 14:53 PST |          |
	|         | -p download-only-334000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=hyperkit              |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/12 14:53:56
	Running on machine: MacOS-Agent-2
	Binary: Built with gc go1.21.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 14:53:56.574199    1722 out.go:296] Setting OutFile to fd 1 ...
	I1212 14:53:56.574415    1722 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 14:53:56.574421    1722 out.go:309] Setting ErrFile to fd 2...
	I1212 14:53:56.574426    1722 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 14:53:56.574600    1722 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17777-1259/.minikube/bin
	W1212 14:53:56.574695    1722 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/17777-1259/.minikube/config/config.json: open /Users/jenkins/minikube-integration/17777-1259/.minikube/config/config.json: no such file or directory
	I1212 14:53:56.576402    1722 out.go:303] Setting JSON to true
	I1212 14:53:56.601689    1722 start.go:128] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":1407,"bootTime":1702420229,"procs":439,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.2","kernelVersion":"23.2.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W1212 14:53:56.601914    1722 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1212 14:53:56.623517    1722 out.go:97] [download-only-334000] minikube v1.32.0 on Darwin 14.2
	I1212 14:53:56.645454    1722 out.go:169] MINIKUBE_LOCATION=17777
	I1212 14:53:56.623762    1722 notify.go:220] Checking for updates...
	W1212 14:53:56.623777    1722 preload.go:295] Failed to list preload files: open /Users/jenkins/minikube-integration/17777-1259/.minikube/cache/preloaded-tarball: no such file or directory
	I1212 14:53:56.689387    1722 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/17777-1259/kubeconfig
	I1212 14:53:56.710527    1722 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I1212 14:53:56.731105    1722 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 14:53:56.752254    1722 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/17777-1259/.minikube
	W1212 14:53:56.794071    1722 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1212 14:53:56.794492    1722 driver.go:392] Setting default libvirt URI to qemu:///system
	I1212 14:53:56.847368    1722 out.go:97] Using the hyperkit driver based on user configuration
	I1212 14:53:56.847428    1722 start.go:298] selected driver: hyperkit
	I1212 14:53:56.847441    1722 start.go:902] validating driver "hyperkit" against <nil>
	I1212 14:53:56.847684    1722 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 14:53:56.848023    1722 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/17777-1259/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I1212 14:53:56.985885    1722 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.32.0
	I1212 14:53:56.990321    1722 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1212 14:53:56.990418    1722 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I1212 14:53:56.990445    1722 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1212 14:53:56.995398    1722 start_flags.go:394] Using suggested 6000MB memory alloc based on sys=32768MB, container=0MB
	I1212 14:53:56.995563    1722 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
	I1212 14:53:56.995630    1722 cni.go:84] Creating CNI manager for ""
	I1212 14:53:56.995645    1722 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1212 14:53:56.995654    1722 start_flags.go:323] config:
	{Name:download-only-334000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-334000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Container
Runtime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 14:53:56.995913    1722 iso.go:125] acquiring lock: {Name:mk96a55b7848c6dd3321ed62339797ab51ac6b5a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 14:53:57.017542    1722 out.go:97] Downloading VM boot image ...
	I1212 14:53:57.017629    1722 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/17738/minikube-v1.32.1-1701996673-17738-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/17738/minikube-v1.32.1-1701996673-17738-amd64.iso.sha256 -> /Users/jenkins/minikube-integration/17777-1259/.minikube/cache/iso/amd64/minikube-v1.32.1-1701996673-17738-amd64.iso
	I1212 14:54:01.765024    1722 out.go:97] Starting control plane node download-only-334000 in cluster download-only-334000
	I1212 14:54:01.765056    1722 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1212 14:54:01.822791    1722 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I1212 14:54:01.822853    1722 cache.go:56] Caching tarball of preloaded images
	I1212 14:54:01.823211    1722 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1212 14:54:01.844394    1722 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I1212 14:54:01.844424    1722 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I1212 14:54:01.927421    1722 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4?checksum=md5:326f3ce331abb64565b50b8c9e791244 -> /Users/jenkins/minikube-integration/17777-1259/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I1212 14:54:08.220345    1722 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I1212 14:54:08.220515    1722 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/17777-1259/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I1212 14:54:08.763016    1722 cache.go:59] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I1212 14:54:08.763255    1722 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/download-only-334000/config.json ...
	I1212 14:54:08.763278    1722 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/download-only-334000/config.json: {Name:mk04fcd8c4a8b71298ae36ef445985a8252921e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 14:54:08.763534    1722 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1212 14:54:08.763825    1722 download.go:107] Downloading: https://dl.k8s.io/release/v1.16.0/bin/darwin/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/amd64/kubectl.sha1 -> /Users/jenkins/minikube-integration/17777-1259/.minikube/cache/darwin/amd64/v1.16.0/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-334000"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.29s)
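The log above walks through the download-only path: the VM boot ISO, then the v1.16.0 preload tarball fetched from the minikube-preloaded-volume-tarballs bucket with an md5 checksum carried in the URL query string, and finally the matching kubectl binary, each cached under .minikube/cache. A minimal sketch of repeating the preload fetch and checksum verification by hand, assuming curl and the macOS md5 tool, with the URL and checksum copied from the log lines above:

	URL="https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4"
	curl -fSL -o preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 "$URL"
	# expected md5 from the log: 326f3ce331abb64565b50b8c9e791244
	md5 preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4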

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/json-events (9.1s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-334000 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=docker --driver=hyperkit 
aaa_download_only_test.go:69: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-334000 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=docker --driver=hyperkit : (9.103854292s)
--- PASS: TestDownloadOnly/v1.28.4/json-events (9.10s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/preload-exists
--- PASS: TestDownloadOnly/v1.28.4/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/kubectl
--- PASS: TestDownloadOnly/v1.28.4/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/LogsDuration (0.33s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-334000
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-334000: exit status 85 (332.844738ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-334000 | jenkins | v1.32.0 | 12 Dec 23 14:53 PST |          |
	|         | -p download-only-334000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=hyperkit              |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-334000 | jenkins | v1.32.0 | 12 Dec 23 14:54 PST |          |
	|         | -p download-only-334000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.4   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=hyperkit              |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/12 14:54:15
	Running on machine: MacOS-Agent-2
	Binary: Built with gc go1.21.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 14:54:15.847241    1735 out.go:296] Setting OutFile to fd 1 ...
	I1212 14:54:15.847457    1735 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 14:54:15.847462    1735 out.go:309] Setting ErrFile to fd 2...
	I1212 14:54:15.847466    1735 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 14:54:15.847638    1735 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17777-1259/.minikube/bin
	W1212 14:54:15.847731    1735 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/17777-1259/.minikube/config/config.json: open /Users/jenkins/minikube-integration/17777-1259/.minikube/config/config.json: no such file or directory
	I1212 14:54:15.848963    1735 out.go:303] Setting JSON to true
	I1212 14:54:15.871053    1735 start.go:128] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":1426,"bootTime":1702420229,"procs":436,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.2","kernelVersion":"23.2.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W1212 14:54:15.871143    1735 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1212 14:54:15.892223    1735 out.go:97] [download-only-334000] minikube v1.32.0 on Darwin 14.2
	I1212 14:54:15.913926    1735 out.go:169] MINIKUBE_LOCATION=17777
	I1212 14:54:15.892398    1735 notify.go:220] Checking for updates...
	I1212 14:54:15.955873    1735 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/17777-1259/kubeconfig
	I1212 14:54:15.976933    1735 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I1212 14:54:15.999929    1735 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 14:54:16.021039    1735 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/17777-1259/.minikube
	W1212 14:54:16.062894    1735 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1212 14:54:16.063360    1735 config.go:182] Loaded profile config "download-only-334000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	W1212 14:54:16.063425    1735 start.go:810] api.Load failed for download-only-334000: filestore "download-only-334000": Docker machine "download-only-334000" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1212 14:54:16.063538    1735 driver.go:392] Setting default libvirt URI to qemu:///system
	W1212 14:54:16.063566    1735 start.go:810] api.Load failed for download-only-334000: filestore "download-only-334000": Docker machine "download-only-334000" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1212 14:54:16.091918    1735 out.go:97] Using the hyperkit driver based on existing profile
	I1212 14:54:16.091950    1735 start.go:298] selected driver: hyperkit
	I1212 14:54:16.091960    1735 start.go:902] validating driver "hyperkit" against &{Name:download-only-334000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17738/minikube-v1.32.1-1701996673-17738-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kuber
netesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-334000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirro
r: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 14:54:16.092195    1735 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 14:54:16.092323    1735 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/17777-1259/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I1212 14:54:16.100385    1735 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.32.0
	I1212 14:54:16.104152    1735 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1212 14:54:16.104181    1735 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I1212 14:54:16.106887    1735 cni.go:84] Creating CNI manager for ""
	I1212 14:54:16.106908    1735 cni.go:158] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1212 14:54:16.106922    1735 start_flags.go:323] config:
	{Name:download-only-334000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17738/minikube-v1.32.1-1701996673-17738-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:download-only-334000 Namespace:
default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketV
MnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 14:54:16.107056    1735 iso.go:125] acquiring lock: {Name:mk96a55b7848c6dd3321ed62339797ab51ac6b5a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 14:54:16.127751    1735 out.go:97] Starting control plane node download-only-334000 in cluster download-only-334000
	I1212 14:54:16.127770    1735 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1212 14:54:16.183829    1735 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I1212 14:54:16.183867    1735 cache.go:56] Caching tarball of preloaded images
	I1212 14:54:16.184203    1735 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1212 14:54:16.205689    1735 out.go:97] Downloading Kubernetes v1.28.4 preload ...
	I1212 14:54:16.205718    1735 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 ...
	I1212 14:54:16.293338    1735 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4?checksum=md5:7ebdea7754e21f51b865dbfc36b53b7d -> /Users/jenkins/minikube-integration/17777-1259/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-334000"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.4/LogsDuration (0.33s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/json-events (10.84s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-334000 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=docker --driver=hyperkit 
aaa_download_only_test.go:69: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-334000 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=docker --driver=hyperkit : (10.84064212s)
--- PASS: TestDownloadOnly/v1.29.0-rc.2/json-events (10.84s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/preload-exists
--- PASS: TestDownloadOnly/v1.29.0-rc.2/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/kubectl
--- PASS: TestDownloadOnly/v1.29.0-rc.2/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.32s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-334000
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-334000: exit status 85 (316.935735ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |               Args                |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only           | download-only-334000 | jenkins | v1.32.0 | 12 Dec 23 14:53 PST |          |
	|         | -p download-only-334000           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0      |                      |         |         |                     |          |
	|         | --container-runtime=docker        |                      |         |         |                     |          |
	|         | --driver=hyperkit                 |                      |         |         |                     |          |
	| start   | -o=json --download-only           | download-only-334000 | jenkins | v1.32.0 | 12 Dec 23 14:54 PST |          |
	|         | -p download-only-334000           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.4      |                      |         |         |                     |          |
	|         | --container-runtime=docker        |                      |         |         |                     |          |
	|         | --driver=hyperkit                 |                      |         |         |                     |          |
	| start   | -o=json --download-only           | download-only-334000 | jenkins | v1.32.0 | 12 Dec 23 14:54 PST |          |
	|         | -p download-only-334000           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.29.0-rc.2 |                      |         |         |                     |          |
	|         | --container-runtime=docker        |                      |         |         |                     |          |
	|         | --driver=hyperkit                 |                      |         |         |                     |          |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/12 14:54:25
	Running on machine: MacOS-Agent-2
	Binary: Built with gc go1.21.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 14:54:25.285216    1748 out.go:296] Setting OutFile to fd 1 ...
	I1212 14:54:25.285489    1748 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 14:54:25.285495    1748 out.go:309] Setting ErrFile to fd 2...
	I1212 14:54:25.285499    1748 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 14:54:25.285679    1748 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17777-1259/.minikube/bin
	W1212 14:54:25.285786    1748 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/17777-1259/.minikube/config/config.json: open /Users/jenkins/minikube-integration/17777-1259/.minikube/config/config.json: no such file or directory
	I1212 14:54:25.287071    1748 out.go:303] Setting JSON to true
	I1212 14:54:25.309210    1748 start.go:128] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":1436,"bootTime":1702420229,"procs":436,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.2","kernelVersion":"23.2.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W1212 14:54:25.309302    1748 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1212 14:54:25.332295    1748 out.go:97] [download-only-334000] minikube v1.32.0 on Darwin 14.2
	I1212 14:54:25.332527    1748 notify.go:220] Checking for updates...
	I1212 14:54:25.353947    1748 out.go:169] MINIKUBE_LOCATION=17777
	I1212 14:54:25.377220    1748 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/17777-1259/kubeconfig
	I1212 14:54:25.399099    1748 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I1212 14:54:25.419974    1748 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 14:54:25.441056    1748 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/17777-1259/.minikube
	W1212 14:54:25.482834    1748 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1212 14:54:25.483610    1748 config.go:182] Loaded profile config "download-only-334000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	W1212 14:54:25.483695    1748 start.go:810] api.Load failed for download-only-334000: filestore "download-only-334000": Docker machine "download-only-334000" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1212 14:54:25.483861    1748 driver.go:392] Setting default libvirt URI to qemu:///system
	W1212 14:54:25.483911    1748 start.go:810] api.Load failed for download-only-334000: filestore "download-only-334000": Docker machine "download-only-334000" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1212 14:54:25.513984    1748 out.go:97] Using the hyperkit driver based on existing profile
	I1212 14:54:25.514039    1748 start.go:298] selected driver: hyperkit
	I1212 14:54:25.514052    1748 start.go:902] validating driver "hyperkit" against &{Name:download-only-334000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17738/minikube-v1.32.1-1701996673-17738-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kuber
netesConfig:{KubernetesVersion:v1.28.4 ClusterName:download-only-334000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirro
r: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 14:54:25.514392    1748 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 14:54:25.514565    1748 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/17777-1259/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I1212 14:54:25.523886    1748 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.32.0
	I1212 14:54:25.527708    1748 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1212 14:54:25.527727    1748 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I1212 14:54:25.530473    1748 cni.go:84] Creating CNI manager for ""
	I1212 14:54:25.530494    1748 cni.go:158] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1212 14:54:25.530508    1748 start_flags.go:323] config:
	{Name:download-only-334000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17738/minikube-v1.32.1-1701996673-17738-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:download-only-334000 Names
pace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: So
cketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 14:54:25.530641    1748 iso.go:125] acquiring lock: {Name:mk96a55b7848c6dd3321ed62339797ab51ac6b5a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 14:54:25.555944    1748 out.go:97] Starting control plane node download-only-334000 in cluster download-only-334000
	I1212 14:54:25.556002    1748 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I1212 14:54:25.609006    1748 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4
	I1212 14:54:25.609041    1748 cache.go:56] Caching tarball of preloaded images
	I1212 14:54:25.609409    1748 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I1212 14:54:25.630565    1748 out.go:97] Downloading Kubernetes v1.29.0-rc.2 preload ...
	I1212 14:54:25.630581    1748 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4 ...
	I1212 14:54:25.710347    1748 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4?checksum=md5:d472e9d5f1548dd0d68eb75b714c5436 -> /Users/jenkins/minikube-integration/17777-1259/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4
	I1212 14:54:33.832243    1748 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4 ...
	I1212 14:54:33.832422    1748 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/17777-1259/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4 ...
	I1212 14:54:34.370754    1748 cache.go:59] Finished verifying existence of preloaded tar for  v1.29.0-rc.2 on docker
	I1212 14:54:34.370837    1748 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/download-only-334000/config.json ...
	I1212 14:54:34.371224    1748 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I1212 14:54:34.371439    1748 download.go:107] Downloading: https://dl.k8s.io/release/v1.29.0-rc.2/bin/darwin/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.29.0-rc.2/bin/darwin/amd64/kubectl.sha256 -> /Users/jenkins/minikube-integration/17777-1259/.minikube/cache/darwin/amd64/v1.29.0-rc.2/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-334000"

-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.32s)

TestDownloadOnly/DeleteAll (0.4s)
=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:190: (dbg) Run:  out/minikube-darwin-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.40s)

TestDownloadOnly/DeleteAlwaysSucceeds (0.37s)
=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:202: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-only-334000
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.37s)

TestBinaryMirror (1.01s)
=== RUN   TestBinaryMirror
aaa_download_only_test.go:307: (dbg) Run:  out/minikube-darwin-amd64 start --download-only -p binary-mirror-704000 --alsologtostderr --binary-mirror http://127.0.0.1:49353 --driver=hyperkit 
helpers_test.go:175: Cleaning up "binary-mirror-704000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p binary-mirror-704000
--- PASS: TestBinaryMirror (1.01s)

TestOffline (54.43s)
=== RUN   TestOffline
=== PAUSE TestOffline
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 start -p offline-docker-542000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperkit 
aab_offline_test.go:55: (dbg) Done: out/minikube-darwin-amd64 start -p offline-docker-542000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperkit : (49.103753795s)
helpers_test.go:175: Cleaning up "offline-docker-542000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p offline-docker-542000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p offline-docker-542000: (5.328385978s)
--- PASS: TestOffline (54.43s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.21s)
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:927: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p addons-609000
addons_test.go:927: (dbg) Non-zero exit: out/minikube-darwin-amd64 addons enable dashboard -p addons-609000: exit status 85 (207.732428ms)

-- stdout --
	* Profile "addons-609000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-609000"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.21s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.19s)
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:938: (dbg) Run:  out/minikube-darwin-amd64 addons disable dashboard -p addons-609000
addons_test.go:938: (dbg) Non-zero exit: out/minikube-darwin-amd64 addons disable dashboard -p addons-609000: exit status 85 (187.323937ms)

-- stdout --
	* Profile "addons-609000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-609000"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.19s)

TestAddons/Setup (132.77s)
=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-darwin-amd64 start -p addons-609000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --driver=hyperkit  --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:109: (dbg) Done: out/minikube-darwin-amd64 start -p addons-609000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --driver=hyperkit  --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m12.773736494s)
--- PASS: TestAddons/Setup (132.77s)

TestAddons/parallel/Registry (14.53s)
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry
=== CONT  TestAddons/parallel/Registry
addons_test.go:329: registry stabilized in 10.5478ms
addons_test.go:331: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-p2z2d" [5d155038-6c9d-4a0a-a946-e57cd60e89cb] Running
addons_test.go:331: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.010868066s
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-qfpbv" [3ded0fc9-1067-409c-8624-196dc58e60c1] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.011537849s
addons_test.go:339: (dbg) Run:  kubectl --context addons-609000 delete po -l run=registry-test --now
addons_test.go:344: (dbg) Run:  kubectl --context addons-609000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:344: (dbg) Done: kubectl --context addons-609000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.880346887s)
addons_test.go:358: (dbg) Run:  out/minikube-darwin-amd64 -p addons-609000 ip
2023/12/12 14:57:05 [DEBUG] GET http://192.169.0.3:5000
addons_test.go:387: (dbg) Run:  out/minikube-darwin-amd64 -p addons-609000 addons disable registry --alsologtostderr -v=1
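The connectivity probe that drives this test can be repeated by hand against a running profile; a minimal sketch using the same image and service name as the log above (the pod name registry-probe is arbitrary):
	# probe the registry addon's in-cluster service, then fetch the node IP used for the host-side check
	kubectl --context addons-609000 run --rm registry-probe --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- wget --spider -S http://registry.kube-system.svc.cluster.local
	out/minikube-darwin-amd64 -p addons-609000 ip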
--- PASS: TestAddons/parallel/Registry (14.53s)

TestAddons/parallel/Ingress (20.93s)
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:206: (dbg) Run:  kubectl --context addons-609000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:231: (dbg) Run:  kubectl --context addons-609000 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:244: (dbg) Run:  kubectl --context addons-609000 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:249: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [40cfdb97-dc06-471b-9292-db3340965c4e] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [40cfdb97-dc06-471b-9292-db3340965c4e] Running
addons_test.go:249: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.01943658s
addons_test.go:261: (dbg) Run:  out/minikube-darwin-amd64 -p addons-609000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:285: (dbg) Run:  kubectl --context addons-609000 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 -p addons-609000 ip
addons_test.go:296: (dbg) Run:  nslookup hello-john.test 192.169.0.3
addons_test.go:305: (dbg) Run:  out/minikube-darwin-amd64 -p addons-609000 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:305: (dbg) Done: out/minikube-darwin-amd64 -p addons-609000 addons disable ingress-dns --alsologtostderr -v=1: (1.094737898s)
addons_test.go:310: (dbg) Run:  out/minikube-darwin-amd64 -p addons-609000 addons disable ingress --alsologtostderr -v=1
addons_test.go:310: (dbg) Done: out/minikube-darwin-amd64 -p addons-609000 addons disable ingress --alsologtostderr -v=1: (7.6469838s)
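Both probes above can be rerun manually while the ingress and ingress-dns addons are still enabled; a sketch using the node IP reported for this run (192.169.0.3):
	# exercise the nginx Ingress from inside the VM, routing on the Host header
	out/minikube-darwin-amd64 -p addons-609000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
	# resolve a test hostname through the ingress-dns responder on the node
	nslookup hello-john.test 192.169.0.3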
--- PASS: TestAddons/parallel/Ingress (20.93s)

TestAddons/parallel/InspektorGadget (10.52s)
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:837: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-pgrqk" [b34e318d-6eb2-4903-a122-a406ec3dede9] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:837: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.010410275s
addons_test.go:840: (dbg) Run:  out/minikube-darwin-amd64 addons disable inspektor-gadget -p addons-609000
addons_test.go:840: (dbg) Done: out/minikube-darwin-amd64 addons disable inspektor-gadget -p addons-609000: (5.50574922s)
--- PASS: TestAddons/parallel/InspektorGadget (10.52s)

TestAddons/parallel/MetricsServer (5.6s)
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:406: metrics-server stabilized in 2.791809ms
addons_test.go:408: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7c66d45ddc-cpch6" [88be62eb-8c54-4d32-a8f9-d4b779638e04] Running
addons_test.go:408: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.009117621s
addons_test.go:414: (dbg) Run:  kubectl --context addons-609000 top pods -n kube-system
addons_test.go:431: (dbg) Run:  out/minikube-darwin-amd64 -p addons-609000 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.60s)

TestAddons/parallel/HelmTiller (10.37s)
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:455: tiller-deploy stabilized in 2.617461ms
addons_test.go:457: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-7b677967b9-5ctjv" [fa258e63-fdcc-44b7-93e2-c10e77fe24b5] Running
addons_test.go:457: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.013501214s
addons_test.go:472: (dbg) Run:  kubectl --context addons-609000 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:472: (dbg) Done: kubectl --context addons-609000 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (4.887299462s)
addons_test.go:489: (dbg) Run:  out/minikube-darwin-amd64 -p addons-609000 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (10.37s)

TestAddons/parallel/CSI (59.49s)
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
addons_test.go:560: csi-hostpath-driver pods stabilized in 11.16161ms
addons_test.go:563: (dbg) Run:  kubectl --context addons-609000 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:568: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-609000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-609000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-609000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-609000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-609000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-609000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-609000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-609000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-609000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-609000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-609000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-609000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-609000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-609000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-609000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-609000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-609000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-609000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-609000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-609000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-609000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-609000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-609000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-609000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-609000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-609000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-609000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-609000 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:573: (dbg) Run:  kubectl --context addons-609000 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:578: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [31300f93-ad05-43b9-93ff-5436e1abf8a1] Pending
helpers_test.go:344: "task-pv-pod" [31300f93-ad05-43b9-93ff-5436e1abf8a1] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [31300f93-ad05-43b9-93ff-5436e1abf8a1] Running
addons_test.go:578: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 13.016510556s
addons_test.go:583: (dbg) Run:  kubectl --context addons-609000 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:588: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-609000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-609000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-609000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:593: (dbg) Run:  kubectl --context addons-609000 delete pod task-pv-pod
addons_test.go:599: (dbg) Run:  kubectl --context addons-609000 delete pvc hpvc
addons_test.go:605: (dbg) Run:  kubectl --context addons-609000 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:610: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-609000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-609000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-609000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:615: (dbg) Run:  kubectl --context addons-609000 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:620: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [fdefa515-dd3a-4fa6-9f09-1eb42ba6549b] Pending
helpers_test.go:344: "task-pv-pod-restore" [fdefa515-dd3a-4fa6-9f09-1eb42ba6549b] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [fdefa515-dd3a-4fa6-9f09-1eb42ba6549b] Running
addons_test.go:620: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.013216784s
addons_test.go:625: (dbg) Run:  kubectl --context addons-609000 delete pod task-pv-pod-restore
addons_test.go:629: (dbg) Run:  kubectl --context addons-609000 delete pvc hpvc-restore
addons_test.go:633: (dbg) Run:  kubectl --context addons-609000 delete volumesnapshot new-snapshot-demo
addons_test.go:637: (dbg) Run:  out/minikube-darwin-amd64 -p addons-609000 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:637: (dbg) Done: out/minikube-darwin-amd64 -p addons-609000 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.52467269s)
addons_test.go:641: (dbg) Run:  out/minikube-darwin-amd64 -p addons-609000 addons disable volumesnapshots --alsologtostderr -v=1
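Stripped of the polling, the CSI scenario above is a short create/snapshot/restore sequence; a condensed sketch using the same testdata manifests:
	kubectl --context addons-609000 create -f testdata/csi-hostpath-driver/pvc.yaml            # PVC "hpvc"
	kubectl --context addons-609000 create -f testdata/csi-hostpath-driver/pv-pod.yaml         # pod that mounts it
	kubectl --context addons-609000 create -f testdata/csi-hostpath-driver/snapshot.yaml       # VolumeSnapshot "new-snapshot-demo"
	kubectl --context addons-609000 create -f testdata/csi-hostpath-driver/pvc-restore.yaml    # PVC "hpvc-restore"
	kubectl --context addons-609000 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
	kubectl --context addons-609000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse}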
--- PASS: TestAddons/parallel/CSI (59.49s)

TestAddons/parallel/Headlamp (14.12s)
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:823: (dbg) Run:  out/minikube-darwin-amd64 addons enable headlamp -p addons-609000 --alsologtostderr -v=1
addons_test.go:823: (dbg) Done: out/minikube-darwin-amd64 addons enable headlamp -p addons-609000 --alsologtostderr -v=1: (1.113946857s)
addons_test.go:828: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-777fd4b855-tc4cf" [916a85cc-b71f-48e4-8007-b0c59fdd72da] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-777fd4b855-tc4cf" [916a85cc-b71f-48e4-8007-b0c59fdd72da] Running
addons_test.go:828: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 13.009523636s
--- PASS: TestAddons/parallel/Headlamp (14.12s)

TestAddons/parallel/CloudSpanner (5.42s)
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:856: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5649c69bf6-xs2gf" [991bd066-bdb5-4337-9752-7a3b797071c7] Running
addons_test.go:856: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.008940192s
addons_test.go:859: (dbg) Run:  out/minikube-darwin-amd64 addons disable cloud-spanner -p addons-609000
--- PASS: TestAddons/parallel/CloudSpanner (5.42s)

TestAddons/parallel/LocalPath (10.29s)
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:872: (dbg) Run:  kubectl --context addons-609000 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:878: (dbg) Run:  kubectl --context addons-609000 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:882: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-609000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-609000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-609000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-609000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-609000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-609000 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:885: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [40820aa5-803b-49c3-9588-8f2c87675a6f] Pending
helpers_test.go:344: "test-local-path" [40820aa5-803b-49c3-9588-8f2c87675a6f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [40820aa5-803b-49c3-9588-8f2c87675a6f] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [40820aa5-803b-49c3-9588-8f2c87675a6f] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:885: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.008926433s
addons_test.go:890: (dbg) Run:  kubectl --context addons-609000 get pvc test-pvc -o=json
addons_test.go:899: (dbg) Run:  out/minikube-darwin-amd64 -p addons-609000 ssh "cat /opt/local-path-provisioner/pvc-14737872-887c-4214-a658-261f0058589a_default_test-pvc/file1"
addons_test.go:911: (dbg) Run:  kubectl --context addons-609000 delete pod test-local-path
addons_test.go:915: (dbg) Run:  kubectl --context addons-609000 delete pvc test-pvc
addons_test.go:919: (dbg) Run:  out/minikube-darwin-amd64 -p addons-609000 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (10.29s)

TestAddons/parallel/NvidiaDevicePlugin (5.38s)
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:951: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-5xw4z" [1661efb0-83f8-45ff-82dc-27fe50657fde] Running
addons_test.go:951: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.008873186s
addons_test.go:954: (dbg) Run:  out/minikube-darwin-amd64 addons disable nvidia-device-plugin -p addons-609000
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.38s)

TestAddons/serial/GCPAuth/Namespaces (0.1s)
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:649: (dbg) Run:  kubectl --context addons-609000 create ns new-namespace
addons_test.go:663: (dbg) Run:  kubectl --context addons-609000 get secret gcp-auth -n new-namespace
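What this asserts is that the gcp-auth addon replicates its gcp-auth secret into namespaces created after the addon is enabled; the same check works against any freshly created namespace (demo-ns below is an arbitrary name):
	kubectl --context addons-609000 create ns demo-ns
	kubectl --context addons-609000 get secret gcp-auth -n demo-ns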
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.10s)

TestAddons/StoppedEnableDisable (5.77s)
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 stop -p addons-609000
addons_test.go:171: (dbg) Done: out/minikube-darwin-amd64 stop -p addons-609000: (5.236044997s)
addons_test.go:175: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p addons-609000
addons_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 addons disable dashboard -p addons-609000
addons_test.go:184: (dbg) Run:  out/minikube-darwin-amd64 addons disable gvisor -p addons-609000
--- PASS: TestAddons/StoppedEnableDisable (5.77s)

TestCertOptions (43.64s)
=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-options-027000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=hyperkit 
cert_options_test.go:49: (dbg) Done: out/minikube-darwin-amd64 start -p cert-options-027000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=hyperkit : (38.006827525s)
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-amd64 -p cert-options-027000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-027000 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-amd64 ssh -p cert-options-027000 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-027000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cert-options-027000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p cert-options-027000: (5.288996137s)
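To repeat the certificate checks by hand on a profile started with the same --apiserver-* flags, the following should be enough (the grep patterns are illustrative, not part of the test):
	# the extra IPs/names and the non-default port should appear in the apiserver serving certificate
	out/minikube-darwin-amd64 -p cert-options-027000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" | grep -A1 "Subject Alternative Name"
	# the kubeconfig written for the profile should point at port 8555
	kubectl --context cert-options-027000 config view | grep 8555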
--- PASS: TestCertOptions (43.64s)

TestCertExpiration (243.95s)
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-expiration-565000 --memory=2048 --cert-expiration=3m --driver=hyperkit 
E1212 15:23:51.179502    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/functional-004000/client.crt: no such file or directory
cert_options_test.go:123: (dbg) Done: out/minikube-darwin-amd64 start -p cert-expiration-565000 --memory=2048 --cert-expiration=3m --driver=hyperkit : (35.429791509s)
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-expiration-565000 --memory=2048 --cert-expiration=8760h --driver=hyperkit 
E1212 15:27:28.112662    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/functional-004000/client.crt: no such file or directory
E1212 15:27:39.689613    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/skaffold-300000/client.crt: no such file or directory
cert_options_test.go:131: (dbg) Done: out/minikube-darwin-amd64 start -p cert-expiration-565000 --memory=2048 --cert-expiration=8760h --driver=hyperkit : (23.203298645s)
helpers_test.go:175: Cleaning up "cert-expiration-565000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cert-expiration-565000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p cert-expiration-565000: (5.311692009s)
--- PASS: TestCertExpiration (243.95s)

TestDockerFlags (42.38s)
=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags
=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-flags-277000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=hyperkit 
docker_test.go:51: (dbg) Done: out/minikube-darwin-amd64 start -p docker-flags-277000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=hyperkit : (38.645155991s)
docker_test.go:56: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-277000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:67: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-277000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-277000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-flags-277000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-flags-277000: (3.408061562s)
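The two ssh probes above carry the whole assertion: values passed with --docker-env must show up in the docker unit's Environment, and --docker-opt values in its ExecStart line. By hand (sketch):
	out/minikube-darwin-amd64 -p docker-flags-277000 ssh "sudo systemctl show docker --property=Environment --no-pager"   # expect FOO=BAR and BAZ=BAT
	out/minikube-darwin-amd64 -p docker-flags-277000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"     # expect the debug and icc=true options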
--- PASS: TestDockerFlags (42.38s)

TestForceSystemdFlag (40.78s)
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-flag-059000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=hyperkit 
docker_test.go:91: (dbg) Done: out/minikube-darwin-amd64 start -p force-systemd-flag-059000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=hyperkit : (35.313424897s)
docker_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-flag-059000 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-059000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-flag-059000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p force-systemd-flag-059000: (5.290624s)
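The assertion here is that a profile started with --force-systemd ends up with Docker using the systemd cgroup driver inside the VM; a one-line check (sketch, expected output: systemd):
	out/minikube-darwin-amd64 -p force-systemd-flag-059000 ssh "docker info --format {{.CgroupDriver}}"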
--- PASS: TestForceSystemdFlag (40.78s)

TestForceSystemdEnv (39.6s)
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-env-181000 --memory=2048 --alsologtostderr -v=5 --driver=hyperkit 
docker_test.go:155: (dbg) Done: out/minikube-darwin-amd64 start -p force-systemd-env-181000 --memory=2048 --alsologtostderr -v=5 --driver=hyperkit : (34.03413153s)
docker_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-env-181000 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-181000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-env-181000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p force-systemd-env-181000: (5.38087651s)
--- PASS: TestForceSystemdEnv (39.60s)

TestHyperKitDriverInstallOrUpdate (7.37s)
=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate
=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (7.37s)

TestErrorSpam/setup (34.26s)
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -p nospam-362000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-362000 --driver=hyperkit 
error_spam_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -p nospam-362000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-362000 --driver=hyperkit : (34.261755074s)
--- PASS: TestErrorSpam/setup (34.26s)

TestErrorSpam/start (1.53s)
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-362000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-362000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-362000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-362000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-362000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-362000 start --dry-run
--- PASS: TestErrorSpam/start (1.53s)

TestErrorSpam/status (0.48s)
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-362000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-362000 status
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-362000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-362000 status
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-362000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-362000 status
--- PASS: TestErrorSpam/status (0.48s)

TestErrorSpam/pause (1.27s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-362000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-362000 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-362000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-362000 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-362000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-362000 pause
--- PASS: TestErrorSpam/pause (1.27s)

TestErrorSpam/unpause (1.32s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-362000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-362000 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-362000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-362000 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-362000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-362000 unpause
--- PASS: TestErrorSpam/unpause (1.32s)

TestErrorSpam/stop (5.68s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-362000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-362000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-amd64 -p nospam-362000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-362000 stop: (5.249334063s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-362000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-362000 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-362000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-362000 stop
--- PASS: TestErrorSpam/stop (5.68s)

TestFunctional/serial/CopySyncFile (0s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1854: local sync path: /Users/jenkins/minikube-integration/17777-1259/.minikube/files/etc/test/nested/copy/1720/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (87.83s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2233: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-004000 --memory=4000 --apiserver-port=8441 --wait=all --driver=hyperkit 
functional_test.go:2233: (dbg) Done: out/minikube-darwin-amd64 start -p functional-004000 --memory=4000 --apiserver-port=8441 --wait=all --driver=hyperkit : (1m27.829396585s)
--- PASS: TestFunctional/serial/StartWithProxy (87.83s)

TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (39.24s)
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-004000 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-darwin-amd64 start -p functional-004000 --alsologtostderr -v=8: (39.235459685s)
functional_test.go:659: soft start took 39.235986717s for "functional-004000" cluster.
--- PASS: TestFunctional/serial/SoftStart (39.24s)

TestFunctional/serial/KubeContext (0.04s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.06s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-004000 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.06s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.3s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-amd64 -p functional-004000 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-darwin-amd64 -p functional-004000 cache add registry.k8s.io/pause:3.1: (1.229673084s)
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-amd64 -p functional-004000 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-darwin-amd64 -p functional-004000 cache add registry.k8s.io/pause:3.3: (1.109450338s)
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-amd64 -p functional-004000 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.30s)

TestFunctional/serial/CacheCmd/cache/add_local (1.46s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-004000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalserialCacheCmdcacheadd_local2224670735/001
functional_test.go:1085: (dbg) Run:  out/minikube-darwin-amd64 -p functional-004000 cache add minikube-local-cache-test:functional-004000
functional_test.go:1090: (dbg) Run:  out/minikube-darwin-amd64 -p functional-004000 cache delete minikube-local-cache-test:functional-004000
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-004000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.46s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.08s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.08s)

TestFunctional/serial/CacheCmd/cache/list (0.08s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-darwin-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.08s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.2s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-darwin-amd64 -p functional-004000 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.20s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.18s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-darwin-amd64 -p functional-004000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-darwin-amd64 -p functional-004000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-004000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (157.433352ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-darwin-amd64 -p functional-004000 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-darwin-amd64 -p functional-004000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.18s)
TestFunctional/serial/CacheCmd/cache/delete (0.16s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.16s)
TestFunctional/serial/MinikubeKubectlCmd (0.54s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-darwin-amd64 -p functional-004000 kubectl -- --context functional-004000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.54s)
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.79s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-004000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.79s)
TestFunctional/serial/ExtraConfig (35.37s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-004000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1212 15:01:51.614421    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/addons-609000/client.crt: no such file or directory
E1212 15:01:51.621378    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/addons-609000/client.crt: no such file or directory
E1212 15:01:51.632794    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/addons-609000/client.crt: no such file or directory
E1212 15:01:51.653753    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/addons-609000/client.crt: no such file or directory
E1212 15:01:51.693990    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/addons-609000/client.crt: no such file or directory
E1212 15:01:51.774514    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/addons-609000/client.crt: no such file or directory
E1212 15:01:51.935394    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/addons-609000/client.crt: no such file or directory
E1212 15:01:52.257621    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/addons-609000/client.crt: no such file or directory
E1212 15:01:52.899007    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/addons-609000/client.crt: no such file or directory
E1212 15:01:54.180231    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/addons-609000/client.crt: no such file or directory
E1212 15:01:56.741711    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/addons-609000/client.crt: no such file or directory
E1212 15:02:01.863092    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/addons-609000/client.crt: no such file or directory
E1212 15:02:12.105122    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/addons-609000/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-darwin-amd64 start -p functional-004000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (35.368081284s)
functional_test.go:757: restart took 35.368223016s for "functional-004000" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (35.37s)
TestFunctional/serial/ComponentHealth (0.05s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-004000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.05s)
TestFunctional/serial/LogsCmd (2.9s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-darwin-amd64 -p functional-004000 logs
functional_test.go:1232: (dbg) Done: out/minikube-darwin-amd64 -p functional-004000 logs: (2.903411325s)
--- PASS: TestFunctional/serial/LogsCmd (2.90s)
TestFunctional/serial/LogsFileCmd (2.88s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-darwin-amd64 -p functional-004000 logs --file /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalserialLogsFileCmd3734917180/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-darwin-amd64 -p functional-004000 logs --file /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalserialLogsFileCmd3734917180/001/logs.txt: (2.883581021s)
--- PASS: TestFunctional/serial/LogsFileCmd (2.88s)
TestFunctional/serial/InvalidService (4.05s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2320: (dbg) Run:  kubectl --context functional-004000 apply -f testdata/invalidsvc.yaml
functional_test.go:2334: (dbg) Run:  out/minikube-darwin-amd64 service invalid-svc -p functional-004000
functional_test.go:2334: (dbg) Non-zero exit: out/minikube-darwin-amd64 service invalid-svc -p functional-004000: exit status 115 (275.628751ms)
-- stdout --
	|-----------|-------------|-------------|--------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |           URL            |
	|-----------|-------------|-------------|--------------------------|
	| default   | invalid-svc |          80 | http://192.169.0.5:31291 |
	|-----------|-------------|-------------|--------------------------|
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                            │
	│    * If the above advice does not help, please let us know:                                                                │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                              │
	│                                                                                                                            │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                   │
	│    * Please also attach the following file to the GitHub issue:                                                            │
	│    * - /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log    │
	│                                                                                                                            │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2326: (dbg) Run:  kubectl --context functional-004000 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.05s)
TestFunctional/parallel/ConfigCmd (0.52s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-004000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-004000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-004000 config get cpus: exit status 14 (92.634006ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-004000 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-004000 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-004000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-004000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-004000 config get cpus: exit status 14 (56.173998ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.52s)
TestFunctional/parallel/DashboardCmd (10.71s)
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-004000 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-004000 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 2990: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (10.71s)
TestFunctional/parallel/DryRun (1.03s)
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-004000 --dry-run --memory 250MB --alsologtostderr --driver=hyperkit 
functional_test.go:970: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-004000 --dry-run --memory 250MB --alsologtostderr --driver=hyperkit : exit status 23 (560.770928ms)
-- stdout --
	* [functional-004000] minikube v1.32.0 on Darwin 14.2
	  - MINIKUBE_LOCATION=17777
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17777-1259/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17777-1259/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I1212 15:03:23.850835    2943 out.go:296] Setting OutFile to fd 1 ...
	I1212 15:03:23.851062    2943 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 15:03:23.851068    2943 out.go:309] Setting ErrFile to fd 2...
	I1212 15:03:23.851072    2943 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 15:03:23.851263    2943 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17777-1259/.minikube/bin
	I1212 15:03:23.852685    2943 out.go:303] Setting JSON to false
	I1212 15:03:23.875330    2943 start.go:128] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":1974,"bootTime":1702420229,"procs":481,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.2","kernelVersion":"23.2.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W1212 15:03:23.875446    2943 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1212 15:03:23.912965    2943 out.go:177] * [functional-004000] minikube v1.32.0 on Darwin 14.2
	I1212 15:03:23.954703    2943 out.go:177]   - MINIKUBE_LOCATION=17777
	I1212 15:03:23.954812    2943 notify.go:220] Checking for updates...
	I1212 15:03:23.997619    2943 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17777-1259/kubeconfig
	I1212 15:03:24.071491    2943 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1212 15:03:24.113560    2943 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 15:03:24.134342    2943 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17777-1259/.minikube
	I1212 15:03:24.155594    2943 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 15:03:24.177537    2943 config.go:182] Loaded profile config "functional-004000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1212 15:03:24.178230    2943 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1212 15:03:24.178323    2943 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1212 15:03:24.187533    2943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50544
	I1212 15:03:24.188068    2943 main.go:141] libmachine: () Calling .GetVersion
	I1212 15:03:24.188487    2943 main.go:141] libmachine: Using API Version  1
	I1212 15:03:24.188497    2943 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 15:03:24.188700    2943 main.go:141] libmachine: () Calling .GetMachineName
	I1212 15:03:24.188856    2943 main.go:141] libmachine: (functional-004000) Calling .DriverName
	I1212 15:03:24.189082    2943 driver.go:392] Setting default libvirt URI to qemu:///system
	I1212 15:03:24.189342    2943 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1212 15:03:24.189364    2943 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1212 15:03:24.197246    2943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50546
	I1212 15:03:24.197603    2943 main.go:141] libmachine: () Calling .GetVersion
	I1212 15:03:24.198002    2943 main.go:141] libmachine: Using API Version  1
	I1212 15:03:24.198020    2943 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 15:03:24.198241    2943 main.go:141] libmachine: () Calling .GetMachineName
	I1212 15:03:24.198353    2943 main.go:141] libmachine: (functional-004000) Calling .DriverName
	I1212 15:03:24.226399    2943 out.go:177] * Using the hyperkit driver based on existing profile
	I1212 15:03:24.247550    2943 start.go:298] selected driver: hyperkit
	I1212 15:03:24.247586    2943 start.go:902] validating driver "hyperkit" against &{Name:functional-004000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17738/minikube-v1.32.1-1701996673-17738-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-004000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.169.0.5 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 15:03:24.247841    2943 start.go:913] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 15:03:24.273612    2943 out.go:177] 
	W1212 15:03:24.294432    2943 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1212 15:03:24.315609    2943 out.go:177] 
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-004000 --dry-run --alsologtostderr -v=1 --driver=hyperkit 
--- PASS: TestFunctional/parallel/DryRun (1.03s)
TestFunctional/parallel/InternationalLanguage (0.65s)
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-004000 --dry-run --memory 250MB --alsologtostderr --driver=hyperkit 
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-004000 --dry-run --memory 250MB --alsologtostderr --driver=hyperkit : exit status 23 (653.493234ms)
-- stdout --
	* [functional-004000] minikube v1.32.0 sur Darwin 14.2
	  - MINIKUBE_LOCATION=17777
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17777-1259/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17777-1259/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote hyperkit basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I1212 15:03:24.800666    2959 out.go:296] Setting OutFile to fd 1 ...
	I1212 15:03:24.801024    2959 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 15:03:24.801029    2959 out.go:309] Setting ErrFile to fd 2...
	I1212 15:03:24.801033    2959 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 15:03:24.801275    2959 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17777-1259/.minikube/bin
	I1212 15:03:24.822200    2959 out.go:303] Setting JSON to false
	I1212 15:03:24.846153    2959 start.go:128] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":1975,"bootTime":1702420229,"procs":486,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.2","kernelVersion":"23.2.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W1212 15:03:24.846250    2959 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1212 15:03:24.866754    2959 out.go:177] * [functional-004000] minikube v1.32.0 sur Darwin 14.2
	I1212 15:03:24.929776    2959 out.go:177]   - MINIKUBE_LOCATION=17777
	I1212 15:03:24.908982    2959 notify.go:220] Checking for updates...
	I1212 15:03:24.971718    2959 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17777-1259/kubeconfig
	I1212 15:03:24.992737    2959 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1212 15:03:25.013535    2959 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 15:03:25.055666    2959 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17777-1259/.minikube
	I1212 15:03:25.118845    2959 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 15:03:25.161493    2959 config.go:182] Loaded profile config "functional-004000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1212 15:03:25.162152    2959 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1212 15:03:25.162237    2959 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1212 15:03:25.171177    2959 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50566
	I1212 15:03:25.171556    2959 main.go:141] libmachine: () Calling .GetVersion
	I1212 15:03:25.172027    2959 main.go:141] libmachine: Using API Version  1
	I1212 15:03:25.172037    2959 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 15:03:25.172286    2959 main.go:141] libmachine: () Calling .GetMachineName
	I1212 15:03:25.172412    2959 main.go:141] libmachine: (functional-004000) Calling .DriverName
	I1212 15:03:25.172611    2959 driver.go:392] Setting default libvirt URI to qemu:///system
	I1212 15:03:25.172859    2959 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1212 15:03:25.172890    2959 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1212 15:03:25.181150    2959 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50568
	I1212 15:03:25.181527    2959 main.go:141] libmachine: () Calling .GetVersion
	I1212 15:03:25.181882    2959 main.go:141] libmachine: Using API Version  1
	I1212 15:03:25.181901    2959 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 15:03:25.182131    2959 main.go:141] libmachine: () Calling .GetMachineName
	I1212 15:03:25.182266    2959 main.go:141] libmachine: (functional-004000) Calling .DriverName
	I1212 15:03:25.210594    2959 out.go:177] * Utilisation du pilote hyperkit basé sur le profil existant
	I1212 15:03:25.252548    2959 start.go:298] selected driver: hyperkit
	I1212 15:03:25.252561    2959 start.go:902] validating driver "hyperkit" against &{Name:functional-004000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17738/minikube-v1.32.1-1701996673-17738-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-004000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.169.0.5 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 15:03:25.252677    2959 start.go:913] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 15:03:25.276742    2959 out.go:177] 
	W1212 15:03:25.297895    2959 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1212 15:03:25.339658    2959 out.go:177] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.65s)
TestFunctional/parallel/StatusCmd (0.53s)
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-darwin-amd64 -p functional-004000 status
E1212 15:03:13.547446    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/addons-609000/client.crt: no such file or directory
functional_test.go:856: (dbg) Run:  out/minikube-darwin-amd64 -p functional-004000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-darwin-amd64 -p functional-004000 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.53s)
TestFunctional/parallel/ServiceCmdConnect (8.56s)
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1628: (dbg) Run:  kubectl --context functional-004000 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-004000 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-55497b8b78-5x2w8" [7271473c-2fe3-40db-a5b3-82334861c6d9] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-55497b8b78-5x2w8" [7271473c-2fe3-40db-a5b3-82334861c6d9] Running
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.008215861s
functional_test.go:1648: (dbg) Run:  out/minikube-darwin-amd64 -p functional-004000 service hello-node-connect --url
functional_test.go:1654: found endpoint for hello-node-connect: http://192.169.0.5:31698
functional_test.go:1674: http://192.169.0.5:31698: success! body:
Hostname: hello-node-connect-55497b8b78-5x2w8

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.169.0.5:8080/

Request Headers:
	accept-encoding=gzip
	host=192.169.0.5:31698
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-
--- PASS: TestFunctional/parallel/ServiceCmdConnect (8.56s)
TestFunctional/parallel/AddonsCmd (0.27s)
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-darwin-amd64 -p functional-004000 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-darwin-amd64 -p functional-004000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.27s)
TestFunctional/parallel/PersistentVolumeClaim (27.25s)
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [efae0432-a1d1-48a4-9ac0-0db546cd03e3] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.010482096s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-004000 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-004000 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-004000 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-004000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [a983c446-756c-4f4b-bc2d-75de828c36cc] Pending
helpers_test.go:344: "sp-pod" [a983c446-756c-4f4b-bc2d-75de828c36cc] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [a983c446-756c-4f4b-bc2d-75de828c36cc] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 14.01166247s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-004000 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-004000 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-004000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [17992b93-7402-4b01-a87f-d4ef0671a60f] Pending
helpers_test.go:344: "sp-pod" [17992b93-7402-4b01-a87f-d4ef0671a60f] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [17992b93-7402-4b01-a87f-d4ef0671a60f] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.012525482s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-004000 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (27.25s)
TestFunctional/parallel/SSHCmd (0.31s)
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-darwin-amd64 -p functional-004000 ssh "echo hello"
functional_test.go:1741: (dbg) Run:  out/minikube-darwin-amd64 -p functional-004000 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.31s)
TestFunctional/parallel/CpCmd (1.23s)
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-004000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-004000 ssh -n functional-004000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-004000 cp functional-004000:/home/docker/cp-test.txt /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelCpCmd1871292180/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-004000 ssh -n functional-004000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-004000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-004000 ssh -n functional-004000 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.23s)
TestFunctional/parallel/MySQL (25.51s)
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1792: (dbg) Run:  kubectl --context functional-004000 replace --force -f testdata/mysql.yaml
functional_test.go:1798: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-859648c796-jbnzg" [c0cb306c-6a4b-45c9-84e0-ae76febffffd] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-859648c796-jbnzg" [c0cb306c-6a4b-45c9-84e0-ae76febffffd] Running
functional_test.go:1798: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 22.021608158s
functional_test.go:1806: (dbg) Run:  kubectl --context functional-004000 exec mysql-859648c796-jbnzg -- mysql -ppassword -e "show databases;"
functional_test.go:1806: (dbg) Non-zero exit: kubectl --context functional-004000 exec mysql-859648c796-jbnzg -- mysql -ppassword -e "show databases;": exit status 1 (163.971956ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1
** /stderr **
functional_test.go:1806: (dbg) Run:  kubectl --context functional-004000 exec mysql-859648c796-jbnzg -- mysql -ppassword -e "show databases;"
functional_test.go:1806: (dbg) Non-zero exit: kubectl --context functional-004000 exec mysql-859648c796-jbnzg -- mysql -ppassword -e "show databases;": exit status 1 (107.193134ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1
** /stderr **
functional_test.go:1806: (dbg) Run:  kubectl --context functional-004000 exec mysql-859648c796-jbnzg -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (25.51s)
TestFunctional/parallel/FileSync (0.23s)
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1928: Checking for existence of /etc/test/nested/copy/1720/hosts within VM
functional_test.go:1930: (dbg) Run:  out/minikube-darwin-amd64 -p functional-004000 ssh "sudo cat /etc/test/nested/copy/1720/hosts"
functional_test.go:1935: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.23s)
TestFunctional/parallel/CertSync (1.36s)
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1971: Checking for existence of /etc/ssl/certs/1720.pem within VM
functional_test.go:1972: (dbg) Run:  out/minikube-darwin-amd64 -p functional-004000 ssh "sudo cat /etc/ssl/certs/1720.pem"
functional_test.go:1971: Checking for existence of /usr/share/ca-certificates/1720.pem within VM
functional_test.go:1972: (dbg) Run:  out/minikube-darwin-amd64 -p functional-004000 ssh "sudo cat /usr/share/ca-certificates/1720.pem"
functional_test.go:1971: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1972: (dbg) Run:  out/minikube-darwin-amd64 -p functional-004000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1998: Checking for existence of /etc/ssl/certs/17202.pem within VM
functional_test.go:1999: (dbg) Run:  out/minikube-darwin-amd64 -p functional-004000 ssh "sudo cat /etc/ssl/certs/17202.pem"
functional_test.go:1998: Checking for existence of /usr/share/ca-certificates/17202.pem within VM
functional_test.go:1999: (dbg) Run:  out/minikube-darwin-amd64 -p functional-004000 ssh "sudo cat /usr/share/ca-certificates/17202.pem"
functional_test.go:1998: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1999: (dbg) Run:  out/minikube-darwin-amd64 -p functional-004000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.36s)
TestFunctional/parallel/NodeLabels (0.05s)
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-004000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.05s)
TestFunctional/parallel/NonActiveRuntimeDisabled (0.17s)
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2026: (dbg) Run:  out/minikube-darwin-amd64 -p functional-004000 ssh "sudo systemctl is-active crio"
functional_test.go:2026: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-004000 ssh "sudo systemctl is-active crio": exit status 1 (169.178937ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.17s)
TestFunctional/parallel/License (0.54s)
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2287: (dbg) Run:  out/minikube-darwin-amd64 license
--- PASS: TestFunctional/parallel/License (0.54s)
TestFunctional/parallel/Version/short (0.1s)
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2255: (dbg) Run:  out/minikube-darwin-amd64 -p functional-004000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.10s)
TestFunctional/parallel/Version/components (0.49s)
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2269: (dbg) Run:  out/minikube-darwin-amd64 -p functional-004000 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.49s)
TestFunctional/parallel/ImageCommands/ImageListShort (0.19s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-004000 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-004000 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.4
registry.k8s.io/kube-proxy:v1.28.4
registry.k8s.io/kube-controller-manager:v1.28.4
registry.k8s.io/kube-apiserver:v1.28.4
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-004000
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-004000
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-004000 image ls --format short --alsologtostderr:
I1212 15:03:26.731830    2993 out.go:296] Setting OutFile to fd 1 ...
I1212 15:03:26.732166    2993 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1212 15:03:26.732174    2993 out.go:309] Setting ErrFile to fd 2...
I1212 15:03:26.732178    2993 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1212 15:03:26.732387    2993 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17777-1259/.minikube/bin
I1212 15:03:26.733017    2993 config.go:182] Loaded profile config "functional-004000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1212 15:03:26.733115    2993 config.go:182] Loaded profile config "functional-004000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1212 15:03:26.733456    2993 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I1212 15:03:26.733501    2993 main.go:141] libmachine: Launching plugin server for driver hyperkit
I1212 15:03:26.741633    2993 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50606
I1212 15:03:26.742097    2993 main.go:141] libmachine: () Calling .GetVersion
I1212 15:03:26.742548    2993 main.go:141] libmachine: Using API Version  1
I1212 15:03:26.742564    2993 main.go:141] libmachine: () Calling .SetConfigRaw
I1212 15:03:26.742783    2993 main.go:141] libmachine: () Calling .GetMachineName
I1212 15:03:26.742890    2993 main.go:141] libmachine: (functional-004000) Calling .GetState
I1212 15:03:26.742979    2993 main.go:141] libmachine: (functional-004000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I1212 15:03:26.743050    2993 main.go:141] libmachine: (functional-004000) DBG | hyperkit pid from json: 2223
I1212 15:03:26.744334    2993 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I1212 15:03:26.744355    2993 main.go:141] libmachine: Launching plugin server for driver hyperkit
I1212 15:03:26.752606    2993 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50608
I1212 15:03:26.752970    2993 main.go:141] libmachine: () Calling .GetVersion
I1212 15:03:26.753359    2993 main.go:141] libmachine: Using API Version  1
I1212 15:03:26.753379    2993 main.go:141] libmachine: () Calling .SetConfigRaw
I1212 15:03:26.753598    2993 main.go:141] libmachine: () Calling .GetMachineName
I1212 15:03:26.753706    2993 main.go:141] libmachine: (functional-004000) Calling .DriverName
I1212 15:03:26.753872    2993 ssh_runner.go:195] Run: systemctl --version
I1212 15:03:26.753893    2993 main.go:141] libmachine: (functional-004000) Calling .GetSSHHostname
I1212 15:03:26.753990    2993 main.go:141] libmachine: (functional-004000) Calling .GetSSHPort
I1212 15:03:26.754078    2993 main.go:141] libmachine: (functional-004000) Calling .GetSSHKeyPath
I1212 15:03:26.754209    2993 main.go:141] libmachine: (functional-004000) Calling .GetSSHUsername
I1212 15:03:26.754311    2993 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17777-1259/.minikube/machines/functional-004000/id_rsa Username:docker}
I1212 15:03:26.814532    2993 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I1212 15:03:26.842657    2993 main.go:141] libmachine: Making call to close driver server
I1212 15:03:26.842668    2993 main.go:141] libmachine: (functional-004000) Calling .Close
I1212 15:03:26.842836    2993 main.go:141] libmachine: Successfully made call to close driver server
I1212 15:03:26.842840    2993 main.go:141] libmachine: (functional-004000) DBG | Closing plugin on server side
I1212 15:03:26.842844    2993 main.go:141] libmachine: Making call to close connection to plugin binary
I1212 15:03:26.842850    2993 main.go:141] libmachine: Making call to close driver server
I1212 15:03:26.842856    2993 main.go:141] libmachine: (functional-004000) Calling .Close
I1212 15:03:26.842999    2993 main.go:141] libmachine: Successfully made call to close driver server
I1212 15:03:26.843007    2993 main.go:141] libmachine: (functional-004000) DBG | Closing plugin on server side
I1212 15:03:26.843010    2993 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.19s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.2s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-004000 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-004000 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| docker.io/library/nginx                     | alpine            | 01e5c69afaf63 | 42.6MB |
| registry.k8s.io/kube-scheduler              | v1.28.4           | e3db313c6dbc0 | 60.1MB |
| registry.k8s.io/pause                       | 3.9               | e6f1816883972 | 744kB  |
| docker.io/library/minikube-local-cache-test | functional-004000 | bce89356b926c | 30B    |
| registry.k8s.io/etcd                        | 3.5.9-0           | 73deb9a3f7025 | 294MB  |
| registry.k8s.io/coredns/coredns             | v1.10.1           | ead0a4a53df89 | 53.6MB |
| registry.k8s.io/pause                       | 3.3               | 0184c1613d929 | 683kB  |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 56cc512116c8f | 4.4MB  |
| registry.k8s.io/pause                       | 3.1               | da86e6ba6ca19 | 742kB  |
| registry.k8s.io/echoserver                  | 1.8               | 82e4c8a736a4f | 95.4MB |
| docker.io/library/nginx                     | latest            | a6bd71f48f683 | 187MB  |
| docker.io/library/mysql                     | 5.7               | bdba757bc9336 | 501MB  |
| docker.io/localhost/my-image                | functional-004000 | 79599ac2e3815 | 1.24MB |
| registry.k8s.io/kube-proxy                  | v1.28.4           | 83f6cc407eed8 | 73.2MB |
| registry.k8s.io/kube-controller-manager     | v1.28.4           | d058aa5ab969c | 122MB  |
| gcr.io/k8s-minikube/busybox                 | latest            | beae173ccac6a | 1.24MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| gcr.io/google-containers/addon-resizer      | functional-004000 | ffd4cfbbe753e | 32.9MB |
| registry.k8s.io/pause                       | latest            | 350b164e7ae1d | 240kB  |
| registry.k8s.io/kube-apiserver              | v1.28.4           | 7fe0e6f37db33 | 126MB  |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-004000 image ls --format table --alsologtostderr:
I1212 15:03:29.556021    3021 out.go:296] Setting OutFile to fd 1 ...
I1212 15:03:29.556330    3021 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1212 15:03:29.556336    3021 out.go:309] Setting ErrFile to fd 2...
I1212 15:03:29.556341    3021 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1212 15:03:29.556538    3021 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17777-1259/.minikube/bin
I1212 15:03:29.557169    3021 config.go:182] Loaded profile config "functional-004000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1212 15:03:29.557270    3021 config.go:182] Loaded profile config "functional-004000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1212 15:03:29.557671    3021 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I1212 15:03:29.557718    3021 main.go:141] libmachine: Launching plugin server for driver hyperkit
I1212 15:03:29.565814    3021 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50646
I1212 15:03:29.566243    3021 main.go:141] libmachine: () Calling .GetVersion
I1212 15:03:29.566759    3021 main.go:141] libmachine: Using API Version  1
I1212 15:03:29.566771    3021 main.go:141] libmachine: () Calling .SetConfigRaw
I1212 15:03:29.567106    3021 main.go:141] libmachine: () Calling .GetMachineName
I1212 15:03:29.567236    3021 main.go:141] libmachine: (functional-004000) Calling .GetState
I1212 15:03:29.567338    3021 main.go:141] libmachine: (functional-004000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I1212 15:03:29.567399    3021 main.go:141] libmachine: (functional-004000) DBG | hyperkit pid from json: 2223
I1212 15:03:29.568769    3021 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I1212 15:03:29.568794    3021 main.go:141] libmachine: Launching plugin server for driver hyperkit
I1212 15:03:29.576828    3021 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50648
I1212 15:03:29.577194    3021 main.go:141] libmachine: () Calling .GetVersion
I1212 15:03:29.577555    3021 main.go:141] libmachine: Using API Version  1
I1212 15:03:29.577568    3021 main.go:141] libmachine: () Calling .SetConfigRaw
I1212 15:03:29.577836    3021 main.go:141] libmachine: () Calling .GetMachineName
I1212 15:03:29.577976    3021 main.go:141] libmachine: (functional-004000) Calling .DriverName
I1212 15:03:29.578139    3021 ssh_runner.go:195] Run: systemctl --version
I1212 15:03:29.578160    3021 main.go:141] libmachine: (functional-004000) Calling .GetSSHHostname
I1212 15:03:29.578249    3021 main.go:141] libmachine: (functional-004000) Calling .GetSSHPort
I1212 15:03:29.578332    3021 main.go:141] libmachine: (functional-004000) Calling .GetSSHKeyPath
I1212 15:03:29.578426    3021 main.go:141] libmachine: (functional-004000) Calling .GetSSHUsername
I1212 15:03:29.578523    3021 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17777-1259/.minikube/machines/functional-004000/id_rsa Username:docker}
I1212 15:03:29.622592    3021 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I1212 15:03:29.652105    3021 main.go:141] libmachine: Making call to close driver server
I1212 15:03:29.652116    3021 main.go:141] libmachine: (functional-004000) Calling .Close
I1212 15:03:29.652267    3021 main.go:141] libmachine: (functional-004000) DBG | Closing plugin on server side
I1212 15:03:29.652301    3021 main.go:141] libmachine: Successfully made call to close driver server
I1212 15:03:29.652328    3021 main.go:141] libmachine: Making call to close connection to plugin binary
I1212 15:03:29.652338    3021 main.go:141] libmachine: Making call to close driver server
I1212 15:03:29.652344    3021 main.go:141] libmachine: (functional-004000) Calling .Close
I1212 15:03:29.652503    3021 main.go:141] libmachine: Successfully made call to close driver server
I1212 15:03:29.652502    3021 main.go:141] libmachine: (functional-004000) DBG | Closing plugin on server side
I1212 15:03:29.652510    3021 main.go:141] libmachine: Making call to close connection to plugin binary
2023/12/12 15:03:35 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.20s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.2s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-004000 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-004000 image ls --format json --alsologtostderr:
[{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"01e5c69afaf635f66aab0b59404a0ac72db1e2e519c3f41a1ff53d37c35bba41","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"42600000"},{"id":"d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.4"],"size":"122000000"},{"id":"73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"294000000"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1240000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1
faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.28.4"],"size":"73200000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"95400000"},{"id":"79599ac2e38155afc2494990b940a839d119fe631fbbd07f718a8d119db5bdde","repoDigests":[],"repoTags":["docker.io/localhost/my-image:functional-004000"],"size":"1240000"},{"id":"7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.4"],"size":"126000000"},{"id":"e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1","repoDigests":[],"repoTags":["registry.k8s.io/ku
be-scheduler:v1.28.4"],"size":"60100000"},{"id":"bdba757bc9336a536d6884ecfaef00d24c1da3becd41e094eb226076436f258c","repoDigests":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"501000000"},{"id":"ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"53600000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"bce89356b926c79570d192d4859a5544f7dc271f333ccc449a7b00875fea7035","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-004000"],"size":"30"},{"id":"a6bd71f48f6839d9faae1f29d3babef831e76bc213107682c5cc80f0cbb30866","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"187000000"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"744000"},{"id":"ffd4cfbbe753e62419e129ee2a
c618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-004000"],"size":"32900000"}]
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-004000 image ls --format json --alsologtostderr:
I1212 15:03:29.352635    3017 out.go:296] Setting OutFile to fd 1 ...
I1212 15:03:29.352912    3017 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1212 15:03:29.352918    3017 out.go:309] Setting ErrFile to fd 2...
I1212 15:03:29.352922    3017 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1212 15:03:29.353118    3017 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17777-1259/.minikube/bin
I1212 15:03:29.353755    3017 config.go:182] Loaded profile config "functional-004000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1212 15:03:29.353850    3017 config.go:182] Loaded profile config "functional-004000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1212 15:03:29.354220    3017 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I1212 15:03:29.354277    3017 main.go:141] libmachine: Launching plugin server for driver hyperkit
I1212 15:03:29.362402    3017 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50641
I1212 15:03:29.362823    3017 main.go:141] libmachine: () Calling .GetVersion
I1212 15:03:29.363288    3017 main.go:141] libmachine: Using API Version  1
I1212 15:03:29.363314    3017 main.go:141] libmachine: () Calling .SetConfigRaw
I1212 15:03:29.363543    3017 main.go:141] libmachine: () Calling .GetMachineName
I1212 15:03:29.363658    3017 main.go:141] libmachine: (functional-004000) Calling .GetState
I1212 15:03:29.363752    3017 main.go:141] libmachine: (functional-004000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I1212 15:03:29.363825    3017 main.go:141] libmachine: (functional-004000) DBG | hyperkit pid from json: 2223
I1212 15:03:29.365130    3017 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I1212 15:03:29.365152    3017 main.go:141] libmachine: Launching plugin server for driver hyperkit
I1212 15:03:29.373363    3017 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50643
I1212 15:03:29.373758    3017 main.go:141] libmachine: () Calling .GetVersion
I1212 15:03:29.374193    3017 main.go:141] libmachine: Using API Version  1
I1212 15:03:29.374213    3017 main.go:141] libmachine: () Calling .SetConfigRaw
I1212 15:03:29.374441    3017 main.go:141] libmachine: () Calling .GetMachineName
I1212 15:03:29.374619    3017 main.go:141] libmachine: (functional-004000) Calling .DriverName
I1212 15:03:29.374813    3017 ssh_runner.go:195] Run: systemctl --version
I1212 15:03:29.374838    3017 main.go:141] libmachine: (functional-004000) Calling .GetSSHHostname
I1212 15:03:29.374964    3017 main.go:141] libmachine: (functional-004000) Calling .GetSSHPort
I1212 15:03:29.375086    3017 main.go:141] libmachine: (functional-004000) Calling .GetSSHKeyPath
I1212 15:03:29.375231    3017 main.go:141] libmachine: (functional-004000) Calling .GetSSHUsername
I1212 15:03:29.375348    3017 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17777-1259/.minikube/machines/functional-004000/id_rsa Username:docker}
I1212 15:03:29.421269    3017 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I1212 15:03:29.473389    3017 main.go:141] libmachine: Making call to close driver server
I1212 15:03:29.473398    3017 main.go:141] libmachine: (functional-004000) Calling .Close
I1212 15:03:29.473558    3017 main.go:141] libmachine: Successfully made call to close driver server
I1212 15:03:29.473565    3017 main.go:141] libmachine: Making call to close connection to plugin binary
I1212 15:03:29.473571    3017 main.go:141] libmachine: Making call to close driver server
I1212 15:03:29.473576    3017 main.go:141] libmachine: (functional-004000) Calling .Close
I1212 15:03:29.473725    3017 main.go:141] libmachine: Successfully made call to close driver server
I1212 15:03:29.473734    3017 main.go:141] libmachine: Making call to close connection to plugin binary
I1212 15:03:29.473737    3017 main.go:141] libmachine: (functional-004000) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.20s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.17s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-004000 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-004000 image ls --format yaml --alsologtostderr:
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-004000
size: "32900000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: 7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.4
size: "126000000"
- id: d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.4
size: "122000000"
- id: bdba757bc9336a536d6884ecfaef00d24c1da3becd41e094eb226076436f258c
repoDigests: []
repoTags:
- docker.io/library/mysql:5.7
size: "501000000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "744000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- registry.k8s.io/echoserver:1.8
size: "95400000"
- id: 01e5c69afaf635f66aab0b59404a0ac72db1e2e519c3f41a1ff53d37c35bba41
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "42600000"
- id: 73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "294000000"
- id: ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "53600000"
- id: 83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.28.4
size: "73200000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: bce89356b926c79570d192d4859a5544f7dc271f333ccc449a7b00875fea7035
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-004000
size: "30"
- id: a6bd71f48f6839d9faae1f29d3babef831e76bc213107682c5cc80f0cbb30866
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "187000000"
- id: e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.4
size: "60100000"

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-004000 image ls --format yaml --alsologtostderr:
I1212 15:03:26.924712    2997 out.go:296] Setting OutFile to fd 1 ...
I1212 15:03:26.925036    2997 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1212 15:03:26.925041    2997 out.go:309] Setting ErrFile to fd 2...
I1212 15:03:26.925046    2997 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1212 15:03:26.925232    2997 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17777-1259/.minikube/bin
I1212 15:03:26.925858    2997 config.go:182] Loaded profile config "functional-004000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1212 15:03:26.925949    2997 config.go:182] Loaded profile config "functional-004000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1212 15:03:26.926307    2997 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I1212 15:03:26.926358    2997 main.go:141] libmachine: Launching plugin server for driver hyperkit
I1212 15:03:26.934352    2997 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50614
I1212 15:03:26.934807    2997 main.go:141] libmachine: () Calling .GetVersion
I1212 15:03:26.935260    2997 main.go:141] libmachine: Using API Version  1
I1212 15:03:26.935289    2997 main.go:141] libmachine: () Calling .SetConfigRaw
I1212 15:03:26.935545    2997 main.go:141] libmachine: () Calling .GetMachineName
I1212 15:03:26.935692    2997 main.go:141] libmachine: (functional-004000) Calling .GetState
I1212 15:03:26.935783    2997 main.go:141] libmachine: (functional-004000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I1212 15:03:26.935852    2997 main.go:141] libmachine: (functional-004000) DBG | hyperkit pid from json: 2223
I1212 15:03:26.937123    2997 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I1212 15:03:26.937145    2997 main.go:141] libmachine: Launching plugin server for driver hyperkit
I1212 15:03:26.945366    2997 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50616
I1212 15:03:26.945743    2997 main.go:141] libmachine: () Calling .GetVersion
I1212 15:03:26.946132    2997 main.go:141] libmachine: Using API Version  1
I1212 15:03:26.946146    2997 main.go:141] libmachine: () Calling .SetConfigRaw
I1212 15:03:26.946363    2997 main.go:141] libmachine: () Calling .GetMachineName
I1212 15:03:26.946461    2997 main.go:141] libmachine: (functional-004000) Calling .DriverName
I1212 15:03:26.946643    2997 ssh_runner.go:195] Run: systemctl --version
I1212 15:03:26.946664    2997 main.go:141] libmachine: (functional-004000) Calling .GetSSHHostname
I1212 15:03:26.946747    2997 main.go:141] libmachine: (functional-004000) Calling .GetSSHPort
I1212 15:03:26.946833    2997 main.go:141] libmachine: (functional-004000) Calling .GetSSHKeyPath
I1212 15:03:26.946912    2997 main.go:141] libmachine: (functional-004000) Calling .GetSSHUsername
I1212 15:03:26.946998    2997 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17777-1259/.minikube/machines/functional-004000/id_rsa Username:docker}
I1212 15:03:26.992751    2997 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I1212 15:03:27.012300    2997 main.go:141] libmachine: Making call to close driver server
I1212 15:03:27.012310    2997 main.go:141] libmachine: (functional-004000) Calling .Close
I1212 15:03:27.012471    2997 main.go:141] libmachine: Successfully made call to close driver server
I1212 15:03:27.012482    2997 main.go:141] libmachine: Making call to close connection to plugin binary
I1212 15:03:27.012487    2997 main.go:141] libmachine: Making call to close driver server
I1212 15:03:27.012492    2997 main.go:141] libmachine: (functional-004000) Calling .Close
I1212 15:03:27.012493    2997 main.go:141] libmachine: (functional-004000) DBG | Closing plugin on server side
I1212 15:03:27.012639    2997 main.go:141] libmachine: (functional-004000) DBG | Closing plugin on server side
I1212 15:03:27.012649    2997 main.go:141] libmachine: Successfully made call to close driver server
I1212 15:03:27.012664    2997 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.17s)
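For reference, the four listing formats exercised by the ImageList checks above (short, table, json, yaml) come from the same command with different --format values; they can be re-run against this profile as shown below, assuming the job's binary path and profile name.

# Re-run the image listing in each format covered by the tests above.
out/minikube-darwin-amd64 -p functional-004000 image ls --format short
out/minikube-darwin-amd64 -p functional-004000 image ls --format table
out/minikube-darwin-amd64 -p functional-004000 image ls --format json
out/minikube-darwin-amd64 -p functional-004000 image ls --format yaml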

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (2.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-darwin-amd64 -p functional-004000 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-004000 ssh pgrep buildkitd: exit status 1 (148.933194ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-darwin-amd64 -p functional-004000 image build -t localhost/my-image:functional-004000 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-darwin-amd64 -p functional-004000 image build -t localhost/my-image:functional-004000 testdata/build --alsologtostderr: (1.93926277s)
functional_test.go:319: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-004000 image build -t localhost/my-image:functional-004000 testdata/build --alsologtostderr:
Sending build context to Docker daemon  3.072kB

Step 1/3 : FROM gcr.io/k8s-minikube/busybox
latest: Pulling from k8s-minikube/busybox
5cc84ad355aa: Pulling fs layer
5cc84ad355aa: Verifying Checksum
5cc84ad355aa: Download complete
5cc84ad355aa: Pull complete
Digest: sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:latest
---> beae173ccac6
Step 2/3 : RUN true
---> Running in c68c11da49e9
Removing intermediate container c68c11da49e9
---> 9b6dd2a3873d
Step 3/3 : ADD content.txt /
---> 79599ac2e381
Successfully built 79599ac2e381
Successfully tagged localhost/my-image:functional-004000
functional_test.go:322: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-004000 image build -t localhost/my-image:functional-004000 testdata/build --alsologtostderr:
I1212 15:03:27.241699    3006 out.go:296] Setting OutFile to fd 1 ...
I1212 15:03:27.242022    3006 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1212 15:03:27.242027    3006 out.go:309] Setting ErrFile to fd 2...
I1212 15:03:27.242032    3006 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1212 15:03:27.242223    3006 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17777-1259/.minikube/bin
I1212 15:03:27.243384    3006 config.go:182] Loaded profile config "functional-004000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1212 15:03:27.244022    3006 config.go:182] Loaded profile config "functional-004000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1212 15:03:27.244423    3006 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I1212 15:03:27.244469    3006 main.go:141] libmachine: Launching plugin server for driver hyperkit
I1212 15:03:27.252546    3006 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50627
I1212 15:03:27.252943    3006 main.go:141] libmachine: () Calling .GetVersion
I1212 15:03:27.253382    3006 main.go:141] libmachine: Using API Version  1
I1212 15:03:27.253394    3006 main.go:141] libmachine: () Calling .SetConfigRaw
I1212 15:03:27.253631    3006 main.go:141] libmachine: () Calling .GetMachineName
I1212 15:03:27.253733    3006 main.go:141] libmachine: (functional-004000) Calling .GetState
I1212 15:03:27.253819    3006 main.go:141] libmachine: (functional-004000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I1212 15:03:27.253905    3006 main.go:141] libmachine: (functional-004000) DBG | hyperkit pid from json: 2223
I1212 15:03:27.255219    3006 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I1212 15:03:27.255244    3006 main.go:141] libmachine: Launching plugin server for driver hyperkit
I1212 15:03:27.263459    3006 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50629
I1212 15:03:27.263826    3006 main.go:141] libmachine: () Calling .GetVersion
I1212 15:03:27.264198    3006 main.go:141] libmachine: Using API Version  1
I1212 15:03:27.264213    3006 main.go:141] libmachine: () Calling .SetConfigRaw
I1212 15:03:27.264408    3006 main.go:141] libmachine: () Calling .GetMachineName
I1212 15:03:27.264508    3006 main.go:141] libmachine: (functional-004000) Calling .DriverName
I1212 15:03:27.264662    3006 ssh_runner.go:195] Run: systemctl --version
I1212 15:03:27.264685    3006 main.go:141] libmachine: (functional-004000) Calling .GetSSHHostname
I1212 15:03:27.264757    3006 main.go:141] libmachine: (functional-004000) Calling .GetSSHPort
I1212 15:03:27.264821    3006 main.go:141] libmachine: (functional-004000) Calling .GetSSHKeyPath
I1212 15:03:27.264904    3006 main.go:141] libmachine: (functional-004000) Calling .GetSSHUsername
I1212 15:03:27.264978    3006 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17777-1259/.minikube/machines/functional-004000/id_rsa Username:docker}
I1212 15:03:27.309892    3006 build_images.go:151] Building image from path: /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/build.1033519421.tar
I1212 15:03:27.309971    3006 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1212 15:03:27.317231    3006 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1033519421.tar
I1212 15:03:27.321191    3006 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1033519421.tar: stat -c "%s %y" /var/lib/minikube/build/build.1033519421.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.1033519421.tar': No such file or directory
I1212 15:03:27.321226    3006 ssh_runner.go:362] scp /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/build.1033519421.tar --> /var/lib/minikube/build/build.1033519421.tar (3072 bytes)
I1212 15:03:27.348438    3006 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1033519421
I1212 15:03:27.356748    3006 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1033519421 -xf /var/lib/minikube/build/build.1033519421.tar
I1212 15:03:27.365976    3006 docker.go:346] Building image: /var/lib/minikube/build/build.1033519421
I1212 15:03:27.366062    3006 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-004000 /var/lib/minikube/build/build.1033519421
DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
Install the buildx component to build images with BuildKit:
https://docs.docker.com/go/buildx/

                                                
                                                
I1212 15:03:29.074134    3006 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-004000 /var/lib/minikube/build/build.1033519421: (1.708069661s)
I1212 15:03:29.074192    3006 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1033519421
I1212 15:03:29.080725    3006 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1033519421.tar
I1212 15:03:29.089828    3006 build_images.go:207] Built localhost/my-image:functional-004000 from /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/build.1033519421.tar
I1212 15:03:29.089855    3006 build_images.go:123] succeeded building to: functional-004000
I1212 15:03:29.089859    3006 build_images.go:124] failed building to: 
I1212 15:03:29.089876    3006 main.go:141] libmachine: Making call to close driver server
I1212 15:03:29.089884    3006 main.go:141] libmachine: (functional-004000) Calling .Close
I1212 15:03:29.090036    3006 main.go:141] libmachine: Successfully made call to close driver server
I1212 15:03:29.090046    3006 main.go:141] libmachine: Making call to close connection to plugin binary
I1212 15:03:29.090057    3006 main.go:141] libmachine: Making call to close driver server
I1212 15:03:29.090057    3006 main.go:141] libmachine: (functional-004000) DBG | Closing plugin on server side
I1212 15:03:29.090063    3006 main.go:141] libmachine: (functional-004000) Calling .Close
I1212 15:03:29.090165    3006 main.go:141] libmachine: (functional-004000) DBG | Closing plugin on server side
I1212 15:03:29.090212    3006 main.go:141] libmachine: Successfully made call to close driver server
I1212 15:03:29.090228    3006 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-004000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.26s)
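Judging from the build steps logged above, testdata/build holds a three-instruction Dockerfile plus a content.txt file. A rough by-hand equivalent of this test is sketched below; the build-ctx directory name and the file contents are placeholders, not taken from the repository.

# Sketch only: recreate a context matching the logged steps (FROM / RUN true / ADD content.txt).
mkdir -p build-ctx                              # hypothetical directory, stands in for testdata/build
printf 'placeholder\n' > build-ctx/content.txt  # placeholder content
cat > build-ctx/Dockerfile <<'EOF'
FROM gcr.io/k8s-minikube/busybox
RUN true
ADD content.txt /
EOF
# Build inside the VM's Docker daemon, as the test does, then confirm the tag is listed.
out/minikube-darwin-amd64 -p functional-004000 image build -t localhost/my-image:functional-004000 build-ctx --alsologtostderr
out/minikube-darwin-amd64 -p functional-004000 image ls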

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (2.43s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (2.347336777s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-004000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.43s)

                                                
                                    
TestFunctional/parallel/DockerEnv/bash (0.85s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:495: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-004000 docker-env) && out/minikube-darwin-amd64 status -p functional-004000"
functional_test.go:518: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-004000 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (0.85s)
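The two checks above amount to pointing the host docker CLI at the daemon inside the VM; without the bash -c wrapper the round trip is simply:

# Export DOCKER_HOST and friends for the functional-004000 VM, then use them.
eval "$(out/minikube-darwin-amd64 -p functional-004000 docker-env)"
out/minikube-darwin-amd64 status -p functional-004000   # cluster should still report Running
docker images                                           # now lists the images inside the VM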

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.21s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2118: (dbg) Run:  out/minikube-darwin-amd64 -p functional-004000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.21s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.25s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2118: (dbg) Run:  out/minikube-darwin-amd64 -p functional-004000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.25s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.26s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2118: (dbg) Run:  out/minikube-darwin-amd64 -p functional-004000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.26s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.22s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-darwin-amd64 -p functional-004000 image load --daemon gcr.io/google-containers/addon-resizer:functional-004000 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-darwin-amd64 -p functional-004000 image load --daemon gcr.io/google-containers/addon-resizer:functional-004000 --alsologtostderr: (3.023421493s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-004000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.22s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.16s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-darwin-amd64 -p functional-004000 image load --daemon gcr.io/google-containers/addon-resizer:functional-004000 --alsologtostderr
E1212 15:02:32.586153    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/addons-609000/client.crt: no such file or directory
functional_test.go:364: (dbg) Done: out/minikube-darwin-amd64 -p functional-004000 image load --daemon gcr.io/google-containers/addon-resizer:functional-004000 --alsologtostderr: (1.960723057s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-004000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.16s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (1.829672262s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-004000
functional_test.go:244: (dbg) Run:  out/minikube-darwin-amd64 -p functional-004000 image load --daemon gcr.io/google-containers/addon-resizer:functional-004000 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-darwin-amd64 -p functional-004000 image load --daemon gcr.io/google-containers/addon-resizer:functional-004000 --alsologtostderr: (3.18216726s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-004000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.24s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-darwin-amd64 -p functional-004000 image save gcr.io/google-containers/addon-resizer:functional-004000 /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-darwin-amd64 -p functional-004000 image save gcr.io/google-containers/addon-resizer:functional-004000 /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr: (1.260946035s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.26s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.37s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-darwin-amd64 -p functional-004000 image rm gcr.io/google-containers/addon-resizer:functional-004000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-004000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.37s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-darwin-amd64 -p functional-004000 image load /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-darwin-amd64 -p functional-004000 image load /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr: (1.130625114s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-004000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.29s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.36s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-004000
functional_test.go:423: (dbg) Run:  out/minikube-darwin-amd64 -p functional-004000 image save --daemon gcr.io/google-containers/addon-resizer:functional-004000 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-darwin-amd64 -p functional-004000 image save --daemon gcr.io/google-containers/addon-resizer:functional-004000 --alsologtostderr: (1.255294523s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-004000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.36s)
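The ImageSaveToFile, ImageRemove, ImageLoadFromFile and ImageSaveDaemon steps above form a save/remove/reload round trip; condensed, the commands they run are:

# Export the tagged image from the VM to a tarball on the host (path reused from this job).
out/minikube-darwin-amd64 -p functional-004000 image save gcr.io/google-containers/addon-resizer:functional-004000 /Users/jenkins/workspace/addon-resizer-save.tar
# Drop it from the VM, then restore it from the tarball.
out/minikube-darwin-amd64 -p functional-004000 image rm gcr.io/google-containers/addon-resizer:functional-004000
out/minikube-darwin-amd64 -p functional-004000 image load /Users/jenkins/workspace/addon-resizer-save.tar
# Alternatively, copy it straight into the host docker daemon and verify it arrived.
out/minikube-darwin-amd64 -p functional-004000 image save --daemon gcr.io/google-containers/addon-resizer:functional-004000
docker image inspect gcr.io/google-containers/addon-resizer:functional-004000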

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (13.14s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1438: (dbg) Run:  kubectl --context functional-004000 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-004000 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-d7447cc7f-srp82" [d2915307-ef16-4de6-8141-a63909614f25] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-d7447cc7f-srp82" [d2915307-ef16-4de6-8141-a63909614f25] Running
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 13.023681237s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (13.14s)
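The hello-node deployment queried by the later ServiceCmd checks is created with two kubectl calls; a by-hand equivalent, including the readiness wait the harness performs through its pod matcher, is roughly:

kubectl --context functional-004000 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
kubectl --context functional-004000 expose deployment hello-node --type=NodePort --port=8080
# The harness polls for an app=hello-node pod to become Ready; kubectl wait does the same job manually.
kubectl --context functional-004000 wait --for=condition=Ready pod -l app=hello-node --timeout=10m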

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.38s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-004000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-004000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-amd64 -p functional-004000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 2697: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-amd64 -p functional-004000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.38s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-004000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.02s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.17s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-004000 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [995ce796-83da-4a49-b0a8-d3bdf2942f1f] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [995ce796-83da-4a49-b0a8-d3bdf2942f1f] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 10.009214079s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.17s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.38s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-darwin-amd64 -p functional-004000 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.38s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.38s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-darwin-amd64 -p functional-004000 service list -o json
functional_test.go:1493: Took "381.20697ms" to run "out/minikube-darwin-amd64 -p functional-004000 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.38s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.26s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-darwin-amd64 -p functional-004000 service --namespace=default --https --url hello-node
functional_test.go:1521: found endpoint: https://192.169.0.5:30549
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.26s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.27s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-darwin-amd64 -p functional-004000 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.27s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.29s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-darwin-amd64 -p functional-004000 service hello-node --url
functional_test.go:1564: found endpoint for hello-node: http://192.169.0.5:30549
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.29s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-004000 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.109.246.132 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.02s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.04s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:319: (dbg) Run:  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
functional_test_tunnel_test.go:327: DNS resolution by dig for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.04s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:351: (dbg) Run:  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
functional_test_tunnel_test.go:359: DNS resolution by dscacheutil for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.02s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:424: tunnel at http://nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.02s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.13s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-amd64 -p functional-004000 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.13s)
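The tunnel block above checks LoadBalancer reachability and cluster DNS from the macOS host; with a tunnel left open in a separate terminal, the individual probes can be repeated by hand (the final curl is an illustrative extra check, not a command the harness logs):

# Terminal 1: keep a tunnel open (may prompt for sudo to add routes on macOS).
out/minikube-darwin-amd64 -p functional-004000 tunnel --alsologtostderr
# Terminal 2: resolve and reach the nginx-svc test service.
kubectl --context functional-004000 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
curl -sI http://nginx-svc.default.svc.cluster.local.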

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.31s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-darwin-amd64 profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.31s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.29s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-darwin-amd64 profile list
functional_test.go:1314: Took "209.515055ms" to run "out/minikube-darwin-amd64 profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-darwin-amd64 profile list -l
functional_test.go:1328: Took "78.512904ms" to run "out/minikube-darwin-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.29s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.29s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json
functional_test.go:1365: Took "207.012533ms" to run "out/minikube-darwin-amd64 profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json --light
functional_test.go:1378: Took "78.28935ms" to run "out/minikube-darwin-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.29s)

TestFunctional/parallel/MountCmd/any-port (6.19s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-004000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdany-port574672549/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1702422194825368000" to /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdany-port574672549/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1702422194825368000" to /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdany-port574672549/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1702422194825368000" to /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdany-port574672549/001/test-1702422194825368000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-004000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-004000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (135.970069ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-004000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-darwin-amd64 -p functional-004000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec 12 23:03 created-by-test
-rw-r--r-- 1 docker docker 24 Dec 12 23:03 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec 12 23:03 test-1702422194825368000
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 -p functional-004000 ssh cat /mount-9p/test-1702422194825368000
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-004000 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [b8ae5a78-d6da-42e0-aeb7-05007b74bbf5] Pending
helpers_test.go:344: "busybox-mount" [b8ae5a78-d6da-42e0-aeb7-05007b74bbf5] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [b8ae5a78-d6da-42e0-aeb7-05007b74bbf5] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [b8ae5a78-d6da-42e0-aeb7-05007b74bbf5] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.011594334s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-004000 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 -p functional-004000 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 -p functional-004000 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-amd64 -p functional-004000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-004000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdany-port574672549/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (6.19s)
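
For reference, the any-port flow above can be replayed by hand with the same commands the test issued; the profile name and mount point are the ones from this run, and <host-dir> is a placeholder for whatever host directory you want to share:

    # start the 9p share in the background (minikube picks a free port)
    out/minikube-darwin-amd64 mount -p functional-004000 <host-dir>:/mount-9p --alsologtostderr -v=1 &
    # confirm the guest sees a 9p filesystem at the mount point and list its contents
    out/minikube-darwin-amd64 -p functional-004000 ssh "findmnt -T /mount-9p | grep 9p"
    out/minikube-darwin-amd64 -p functional-004000 ssh -- ls -la /mount-9p
    # tear the mount down again
    out/minikube-darwin-amd64 -p functional-004000 ssh "sudo umount -f /mount-9p"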

TestFunctional/parallel/MountCmd/specific-port (1.37s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-004000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdspecific-port378592358/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-004000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-004000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (136.226146ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-004000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-darwin-amd64 -p functional-004000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-004000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdspecific-port378592358/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-amd64 -p functional-004000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-004000 ssh "sudo umount -f /mount-9p": exit status 1 (136.343468ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-darwin-amd64 -p functional-004000 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-004000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdspecific-port378592358/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.37s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.42s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-004000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup821389942/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-004000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup821389942/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-004000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup821389942/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-004000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-004000 ssh "findmnt -T" /mount1: exit status 1 (173.427726ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-004000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-004000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-004000 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-darwin-amd64 mount -p functional-004000 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-004000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup821389942/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-004000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup821389942/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-004000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup821389942/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.42s)
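
The cleanup verified above does not unmount each share individually; it relies on minikube's kill switch, which appears in the log and stops every background mount process for the profile:

    # stop all "minikube mount" processes associated with this profile
    out/minikube-darwin-amd64 mount -p functional-004000 --kill=true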

TestFunctional/delete_addon-resizer_images (0.21s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-004000
--- PASS: TestFunctional/delete_addon-resizer_images (0.21s)

TestFunctional/delete_my-image_image (0.05s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-004000
--- PASS: TestFunctional/delete_my-image_image (0.05s)

TestFunctional/delete_minikube_cached_images (0.05s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-004000
--- PASS: TestFunctional/delete_minikube_cached_images (0.05s)

TestImageBuild/serial/Setup (37.15s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-amd64 start -p image-693000 --driver=hyperkit 
image_test.go:69: (dbg) Done: out/minikube-darwin-amd64 start -p image-693000 --driver=hyperkit : (37.15465041s)
--- PASS: TestImageBuild/serial/Setup (37.15s)

TestImageBuild/serial/NormalBuild (1.18s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-693000
image_test.go:78: (dbg) Done: out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-693000: (1.177326773s)
--- PASS: TestImageBuild/serial/NormalBuild (1.18s)

TestImageBuild/serial/BuildWithBuildArg (0.73s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-693000
--- PASS: TestImageBuild/serial/BuildWithBuildArg (0.73s)

TestImageBuild/serial/BuildWithDockerIgnore (0.24s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-693000
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.24s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.27s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-693000
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.27s)

TestIngressAddonLegacy/StartLegacyK8sCluster (101.9s)

=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-darwin-amd64 start -p ingress-addon-legacy-267000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=hyperkit 
E1212 15:04:35.468737    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/addons-609000/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-darwin-amd64 start -p ingress-addon-legacy-267000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=hyperkit : (1m41.904539306s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (101.90s)

TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (13.81s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-darwin-amd64 -p ingress-addon-legacy-267000 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-darwin-amd64 -p ingress-addon-legacy-267000 addons enable ingress --alsologtostderr -v=5: (13.808722509s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (13.81s)

TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.53s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-darwin-amd64 -p ingress-addon-legacy-267000 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.53s)

TestIngressAddonLegacy/serial/ValidateIngressAddons (35.76s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:206: (dbg) Run:  kubectl --context ingress-addon-legacy-267000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:206: (dbg) Done: kubectl --context ingress-addon-legacy-267000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (12.534242013s)
addons_test.go:231: (dbg) Run:  kubectl --context ingress-addon-legacy-267000 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:244: (dbg) Run:  kubectl --context ingress-addon-legacy-267000 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:249: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [8f0812b2-b0b4-4e17-9254-02d61daa6ab8] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [8f0812b2-b0b4-4e17-9254-02d61daa6ab8] Running
addons_test.go:249: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 11.008737346s
addons_test.go:261: (dbg) Run:  out/minikube-darwin-amd64 -p ingress-addon-legacy-267000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:285: (dbg) Run:  kubectl --context ingress-addon-legacy-267000 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 -p ingress-addon-legacy-267000 ip
addons_test.go:296: (dbg) Run:  nslookup hello-john.test 192.169.0.7
addons_test.go:305: (dbg) Run:  out/minikube-darwin-amd64 -p ingress-addon-legacy-267000 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:305: (dbg) Done: out/minikube-darwin-amd64 -p ingress-addon-legacy-267000 addons disable ingress-dns --alsologtostderr -v=1: (4.020765515s)
addons_test.go:310: (dbg) Run:  out/minikube-darwin-amd64 -p ingress-addon-legacy-267000 addons disable ingress --alsologtostderr -v=1
E1212 15:06:51.610646    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/addons-609000/client.crt: no such file or directory
addons_test.go:310: (dbg) Done: out/minikube-darwin-amd64 -p ingress-addon-legacy-267000 addons disable ingress --alsologtostderr -v=1: (7.269486698s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddons (35.76s)
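
As a condensed sketch of what this legacy-ingress check exercises (all commands are taken from the block above; the manifests live in the test's testdata directory and the 192.169.0.7 address is specific to this run):

    out/minikube-darwin-amd64 -p ingress-addon-legacy-267000 addons enable ingress --alsologtostderr -v=5
    out/minikube-darwin-amd64 -p ingress-addon-legacy-267000 addons enable ingress-dns --alsologtostderr -v=5
    kubectl --context ingress-addon-legacy-267000 replace --force -f testdata/nginx-ingress-v1beta1.yaml
    kubectl --context ingress-addon-legacy-267000 replace --force -f testdata/nginx-pod-svc.yaml
    # reach nginx through the in-VM ingress controller, then resolve the ingress-dns name against the cluster IP
    out/minikube-darwin-amd64 -p ingress-addon-legacy-267000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
    nslookup hello-john.test 192.169.0.7
    # disable both addons again
    out/minikube-darwin-amd64 -p ingress-addon-legacy-267000 addons disable ingress-dns --alsologtostderr -v=1
    out/minikube-darwin-amd64 -p ingress-addon-legacy-267000 addons disable ingress --alsologtostderr -v=1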

TestJSONOutput/start/Command (47.29s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-824000 --output=json --user=testUser --memory=2200 --wait=true --driver=hyperkit 
E1212 15:07:19.308800    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/addons-609000/client.crt: no such file or directory
E1212 15:07:28.075335    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/functional-004000/client.crt: no such file or directory
E1212 15:07:28.080710    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/functional-004000/client.crt: no such file or directory
E1212 15:07:28.090859    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/functional-004000/client.crt: no such file or directory
E1212 15:07:28.111447    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/functional-004000/client.crt: no such file or directory
E1212 15:07:28.152771    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/functional-004000/client.crt: no such file or directory
E1212 15:07:28.233238    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/functional-004000/client.crt: no such file or directory
E1212 15:07:28.393370    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/functional-004000/client.crt: no such file or directory
E1212 15:07:28.714302    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/functional-004000/client.crt: no such file or directory
E1212 15:07:29.355075    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/functional-004000/client.crt: no such file or directory
E1212 15:07:30.635224    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/functional-004000/client.crt: no such file or directory
E1212 15:07:33.213345    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/functional-004000/client.crt: no such file or directory
E1212 15:07:38.334185    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/functional-004000/client.crt: no such file or directory
E1212 15:07:48.574756    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/functional-004000/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 start -p json-output-824000 --output=json --user=testUser --memory=2200 --wait=true --driver=hyperkit : (47.28560164s)
--- PASS: TestJSONOutput/start/Command (47.29s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.45s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 pause -p json-output-824000 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.45s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.41s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 unpause -p json-output-824000 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.41s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (8.17s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 stop -p json-output-824000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 stop -p json-output-824000 --output=json --user=testUser: (8.170732618s)
--- PASS: TestJSONOutput/stop/Command (8.17s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.76s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-error-448000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p json-output-error-448000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (385.291538ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"b0a7ac89-2a96-4efa-aa96-8e48c0f06e2c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-448000] minikube v1.32.0 on Darwin 14.2","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"284fd2b7-36ba-4b63-b1c4-23dd9fbbf24d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17777"}}
	{"specversion":"1.0","id":"2815a47e-4f36-41f3-a7eb-eba45a7acce5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/17777-1259/kubeconfig"}}
	{"specversion":"1.0","id":"72b68b88-ac86-4eaa-9bfa-2882344f9d93","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-amd64"}}
	{"specversion":"1.0","id":"259c0021-2411-4c5a-861f-dba45170838b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"39f8c116-c509-407a-a7b9-17f19ba9a767","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/17777-1259/.minikube"}}
	{"specversion":"1.0","id":"37fc5f4f-d7db-42f7-b896-eee5a5b8fdba","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"a5d5370f-aaef-42f4-beae-bcb45b940845","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-448000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p json-output-error-448000
--- PASS: TestErrorJSONOutput (0.76s)
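
Each line of the --output=json stream above is a CloudEvents-style envelope (specversion, id, source, type, data); because the driver is deliberately unsupported, the run ends with a single io.k8s.sigs.minikube.error event carrying exitcode 56 and name DRV_UNSUPPORTED_OS, matching the command's exit status. The invocation, copied from the log, is:

    # an unsupported --driver value forces the JSON error path; the command exits 56
    out/minikube-darwin-amd64 start -p json-output-error-448000 --memory=2200 --output=json --wait=true --driver=fail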

TestMainNoArgs (0.08s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-amd64
--- PASS: TestMainNoArgs (0.08s)

TestMinikubeProfile (85.24s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p first-473000 --driver=hyperkit 
E1212 15:08:09.055402    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/functional-004000/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p first-473000 --driver=hyperkit : (37.300064485s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p second-475000 --driver=hyperkit 
E1212 15:08:50.015924    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/functional-004000/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p second-475000 --driver=hyperkit : (36.514756166s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 profile first-473000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 profile second-475000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-475000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p second-475000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p second-475000: (5.270673992s)
helpers_test.go:175: Cleaning up "first-473000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p first-473000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p first-473000: (5.313986177s)
--- PASS: TestMinikubeProfile (85.24s)
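
The profile exercise above amounts to creating two clusters and switching the active profile between them; a minimal replay with the commands from this run (the profile names are just the generated ones) would be:

    out/minikube-darwin-amd64 start -p first-473000 --driver=hyperkit
    out/minikube-darwin-amd64 start -p second-475000 --driver=hyperkit
    # make a profile the active one, then inspect the full list as JSON
    out/minikube-darwin-amd64 profile first-473000
    out/minikube-darwin-amd64 profile list -ojson
    # clean up both clusters
    out/minikube-darwin-amd64 delete -p second-475000
    out/minikube-darwin-amd64 delete -p first-473000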

TestMountStart/serial/StartWithMountFirst (16.37s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-1-531000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperkit 
mount_start_test.go:98: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-1-531000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperkit : (15.370048164s)
--- PASS: TestMountStart/serial/StartWithMountFirst (16.37s)

TestMountStart/serial/VerifyMountFirst (0.31s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-1-531000 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-1-531000 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.31s)

TestMountStart/serial/StartWithMountSecond (16.22s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-2-545000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=hyperkit 
mount_start_test.go:98: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-2-545000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=hyperkit : (15.219050107s)
--- PASS: TestMountStart/serial/StartWithMountSecond (16.22s)

TestMountStart/serial/VerifyMountSecond (0.29s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-545000 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-545000 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.29s)

TestMountStart/serial/DeleteFirst (2.37s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 delete -p mount-start-1-531000 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-darwin-amd64 delete -p mount-start-1-531000 --alsologtostderr -v=5: (2.364922515s)
--- PASS: TestMountStart/serial/DeleteFirst (2.37s)

TestMountStart/serial/VerifyMountPostDelete (0.29s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-545000 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-545000 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.29s)

TestMountStart/serial/Stop (2.22s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-darwin-amd64 stop -p mount-start-2-545000
mount_start_test.go:155: (dbg) Done: out/minikube-darwin-amd64 stop -p mount-start-2-545000: (2.216148455s)
--- PASS: TestMountStart/serial/Stop (2.22s)

TestMountStart/serial/RestartStopped (16.15s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-2-545000
E1212 15:10:11.936048    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/functional-004000/client.crt: no such file or directory
mount_start_test.go:166: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-2-545000: (15.153214443s)
--- PASS: TestMountStart/serial/RestartStopped (16.15s)

TestMountStart/serial/VerifyMountPostStop (0.31s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-545000 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-545000 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.31s)

TestMultiNode/serial/RestartKeepsNodes (67.59s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-449000
multinode_test.go:318: (dbg) Run:  out/minikube-darwin-amd64 stop -p multinode-449000
E1212 15:12:28.074134    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/functional-004000/client.crt: no such file or directory
multinode_test.go:318: (dbg) Done: out/minikube-darwin-amd64 stop -p multinode-449000: (8.242776976s)
multinode_test.go:323: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-449000 --wait=true -v=8 --alsologtostderr
E1212 15:12:42.280352    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/ingress-addon-legacy-267000/client.crt: no such file or directory
E1212 15:12:55.775464    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/functional-004000/client.crt: no such file or directory
multinode_test.go:323: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-449000 --wait=true -v=8 --alsologtostderr: (59.2378053s)
multinode_test.go:328: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-449000
--- PASS: TestMultiNode/serial/RestartKeepsNodes (67.59s)
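
The restart check above boils down to comparing the node list before and after a stop/start cycle; the same sequence, taken from the log, is:

    out/minikube-darwin-amd64 node list -p multinode-449000
    out/minikube-darwin-amd64 stop -p multinode-449000
    out/minikube-darwin-amd64 start -p multinode-449000 --wait=true -v=8 --alsologtostderr
    # the post-restart node list should match the one captured before the stop
    out/minikube-darwin-amd64 node list -p multinode-449000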

TestPreload (148.96s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-354000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperkit  --kubernetes-version=v1.24.4
E1212 15:16:20.340995    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/ingress-addon-legacy-267000/client.crt: no such file or directory
E1212 15:16:48.041361    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/ingress-addon-legacy-267000/client.crt: no such file or directory
E1212 15:16:51.606175    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/addons-609000/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p test-preload-354000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperkit  --kubernetes-version=v1.24.4: (1m17.672153753s)
preload_test.go:52: (dbg) Run:  out/minikube-darwin-amd64 -p test-preload-354000 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-darwin-amd64 -p test-preload-354000 image pull gcr.io/k8s-minikube/busybox: (1.483908609s)
preload_test.go:58: (dbg) Run:  out/minikube-darwin-amd64 stop -p test-preload-354000
E1212 15:17:28.071585    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/functional-004000/client.crt: no such file or directory
preload_test.go:58: (dbg) Done: out/minikube-darwin-amd64 stop -p test-preload-354000: (8.256190955s)
preload_test.go:66: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-354000 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=hyperkit 
E1212 15:18:14.665335    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/addons-609000/client.crt: no such file or directory
preload_test.go:66: (dbg) Done: out/minikube-darwin-amd64 start -p test-preload-354000 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=hyperkit : (56.094252522s)
preload_test.go:71: (dbg) Run:  out/minikube-darwin-amd64 -p test-preload-354000 image list
helpers_test.go:175: Cleaning up "test-preload-354000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p test-preload-354000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p test-preload-354000: (5.268255041s)
--- PASS: TestPreload (148.96s)
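
Stripped of logging flags, the preload scenario above is: create a cluster on an older Kubernetes version without the preloaded image tarball, pull an extra image, stop, restart on the current default version, and list images, presumably to confirm the manually pulled image survived the restart. A condensed replay using the commands from this run:

    out/minikube-darwin-amd64 start -p test-preload-354000 --memory=2200 --wait=true --preload=false --driver=hyperkit --kubernetes-version=v1.24.4
    out/minikube-darwin-amd64 -p test-preload-354000 image pull gcr.io/k8s-minikube/busybox
    out/minikube-darwin-amd64 stop -p test-preload-354000
    out/minikube-darwin-amd64 start -p test-preload-354000 --memory=2200 --wait=true --driver=hyperkit
    # busybox should still appear in the image list after the restart
    out/minikube-darwin-amd64 -p test-preload-354000 image list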

TestScheduledStopUnix (105.62s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-amd64 start -p scheduled-stop-678000 --memory=2048 --driver=hyperkit 
scheduled_stop_test.go:128: (dbg) Done: out/minikube-darwin-amd64 start -p scheduled-stop-678000 --memory=2048 --driver=hyperkit : (34.086510817s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-678000 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.TimeToStop}} -p scheduled-stop-678000 -n scheduled-stop-678000
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-678000 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-678000 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-678000 -n scheduled-stop-678000
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 status -p scheduled-stop-678000
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-678000 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 status -p scheduled-stop-678000
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p scheduled-stop-678000: exit status 7 (67.464848ms)

                                                
                                                
-- stdout --
	scheduled-stop-678000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-678000 -n scheduled-stop-678000
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-678000 -n scheduled-stop-678000: exit status 7 (67.009786ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-678000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p scheduled-stop-678000
--- PASS: TestScheduledStopUnix (105.62s)
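
The scheduled-stop sequence above condenses to the following (commands and timings are the ones this run used); the final checks rely on status exiting with code 7 and reporting the host as Stopped once a short schedule has fired:

    # schedule a stop five minutes out and inspect the countdown
    out/minikube-darwin-amd64 stop -p scheduled-stop-678000 --schedule 5m
    out/minikube-darwin-amd64 status --format={{.TimeToStop}} -p scheduled-stop-678000
    # a newer schedule supersedes the previous one, and --cancel-scheduled aborts it
    out/minikube-darwin-amd64 stop -p scheduled-stop-678000 --schedule 15s
    out/minikube-darwin-amd64 stop -p scheduled-stop-678000 --cancel-scheduled
    # after a 15s schedule is allowed to fire, status exits 7 and reports "Stopped"
    out/minikube-darwin-amd64 stop -p scheduled-stop-678000 --schedule 15s
    out/minikube-darwin-amd64 status -p scheduled-stop-678000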

TestSkaffold (110.55s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/skaffold.exe1280059419 version
skaffold_test.go:63: skaffold version: v2.9.0
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-amd64 start -p skaffold-300000 --memory=2600 --driver=hyperkit 
skaffold_test.go:66: (dbg) Done: out/minikube-darwin-amd64 start -p skaffold-300000 --memory=2600 --driver=hyperkit : (35.462436605s)
skaffold_test.go:86: copying out/minikube-darwin-amd64 to /Users/jenkins/workspace/out/minikube
skaffold_test.go:105: (dbg) Run:  /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/skaffold.exe1280059419 run --minikube-profile skaffold-300000 --kube-context skaffold-300000 --status-check=true --port-forward=false --interactive=false
E1212 15:21:20.338574    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/ingress-addon-legacy-267000/client.crt: no such file or directory
E1212 15:21:51.639962    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/addons-609000/client.crt: no such file or directory
skaffold_test.go:105: (dbg) Done: /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/skaffold.exe1280059419 run --minikube-profile skaffold-300000 --kube-context skaffold-300000 --status-check=true --port-forward=false --interactive=false: (56.910009784s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:344: "leeroy-app-8566f79ff7-rvvgf" [73b6dbab-e5db-4af5-935c-dd28cbdb4554] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 5.012661267s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:344: "leeroy-web-795b8f7fc8-22hln" [715a5d0e-1840-47ce-9a0e-7fd6fa434778] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.008489195s
helpers_test.go:175: Cleaning up "skaffold-300000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p skaffold-300000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p skaffold-300000: (5.269841512s)
--- PASS: TestSkaffold (110.55s)
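
The skaffold run above drives a skaffold binary (a temporary download in this job) against the freshly created profile; the key flags are the ones pointing skaffold at the minikube profile and kube-context. With a locally installed skaffold, the equivalent would look roughly like:

    out/minikube-darwin-amd64 start -p skaffold-300000 --memory=2600 --driver=hyperkit
    # point skaffold at the minikube profile and its kube-context
    skaffold run --minikube-profile skaffold-300000 --kube-context skaffold-300000 --status-check=true --port-forward=false --interactive=false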

TestRunningBinaryUpgrade (185.09s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:133: (dbg) Run:  /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.6.2.3029788283.exe start -p running-upgrade-575000 --memory=2200 --vm-driver=hyperkit 
version_upgrade_test.go:133: (dbg) Done: /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.6.2.3029788283.exe start -p running-upgrade-575000 --memory=2200 --vm-driver=hyperkit : (1m31.835989321s)
version_upgrade_test.go:143: (dbg) Run:  out/minikube-darwin-amd64 start -p running-upgrade-575000 --memory=2200 --alsologtostderr -v=1 --driver=hyperkit 
E1212 15:26:20.382529    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/ingress-addon-legacy-267000/client.crt: no such file or directory
E1212 15:26:51.649777    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/addons-609000/client.crt: no such file or directory
E1212 15:26:58.724437    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/skaffold-300000/client.crt: no such file or directory
E1212 15:26:58.730338    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/skaffold-300000/client.crt: no such file or directory
E1212 15:26:58.741351    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/skaffold-300000/client.crt: no such file or directory
E1212 15:26:58.761468    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/skaffold-300000/client.crt: no such file or directory
E1212 15:26:58.801640    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/skaffold-300000/client.crt: no such file or directory
E1212 15:26:58.883036    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/skaffold-300000/client.crt: no such file or directory
E1212 15:26:59.043158    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/skaffold-300000/client.crt: no such file or directory
E1212 15:26:59.363494    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/skaffold-300000/client.crt: no such file or directory
E1212 15:27:00.005453    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/skaffold-300000/client.crt: no such file or directory
E1212 15:27:01.285932    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/skaffold-300000/client.crt: no such file or directory
E1212 15:27:03.846406    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/skaffold-300000/client.crt: no such file or directory
E1212 15:27:08.967654    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/skaffold-300000/client.crt: no such file or directory
E1212 15:27:19.208089    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/skaffold-300000/client.crt: no such file or directory
version_upgrade_test.go:143: (dbg) Done: out/minikube-darwin-amd64 start -p running-upgrade-575000 --memory=2200 --alsologtostderr -v=1 --driver=hyperkit : (1m27.129217708s)
helpers_test.go:175: Cleaning up "running-upgrade-575000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p running-upgrade-575000
E1212 15:27:43.444896    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/ingress-addon-legacy-267000/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p running-upgrade-575000: (5.271743458s)
--- PASS: TestRunningBinaryUpgrade (185.09s)

                                                
                                    
x
+
TestKubernetesUpgrade (151.66s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:235: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-111000 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=hyperkit 
version_upgrade_test.go:235: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-111000 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=hyperkit : (1m13.43238329s)
version_upgrade_test.go:240: (dbg) Run:  out/minikube-darwin-amd64 stop -p kubernetes-upgrade-111000
version_upgrade_test.go:240: (dbg) Done: out/minikube-darwin-amd64 stop -p kubernetes-upgrade-111000: (8.240913406s)
version_upgrade_test.go:245: (dbg) Run:  out/minikube-darwin-amd64 -p kubernetes-upgrade-111000 status --format={{.Host}}
version_upgrade_test.go:245: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p kubernetes-upgrade-111000 status --format={{.Host}}: exit status 7 (68.941237ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:247: status error: exit status 7 (may be ok)
version_upgrade_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-111000 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=hyperkit 
version_upgrade_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-111000 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=hyperkit : (32.611408078s)
version_upgrade_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-111000 version --output=json
version_upgrade_test.go:280: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:282: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-111000 --memory=2200 --kubernetes-version=v1.16.0 --driver=hyperkit 
E1212 15:29:42.613895    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/skaffold-300000/client.crt: no such file or directory
version_upgrade_test.go:282: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p kubernetes-upgrade-111000 --memory=2200 --kubernetes-version=v1.16.0 --driver=hyperkit : exit status 106 (452.87884ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-111000] minikube v1.32.0 on Darwin 14.2
	  - MINIKUBE_LOCATION=17777
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17777-1259/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17777-1259/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.29.0-rc.2 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-111000
	    minikube start -p kubernetes-upgrade-111000 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-1110002 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.29.0-rc.2, by running:
	    
	    minikube start -p kubernetes-upgrade-111000 --kubernetes-version=v1.29.0-rc.2
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:286: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:288: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-111000 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=hyperkit 
version_upgrade_test.go:288: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-111000 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=hyperkit : (31.537812433s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-111000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p kubernetes-upgrade-111000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p kubernetes-upgrade-111000: (5.263811761s)
--- PASS: TestKubernetesUpgrade (151.66s)
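
TestKubernetesUpgrade drives the whole flow above purely through the CLI and its exit codes: 0 for the successful starts, 7 for status on the stopped cluster, and 106 for the refused downgrade. The Go sketch below mirrors that sequence with os/exec; it is a minimal illustration, not the version_upgrade_test.go source, and the binary path and profile name are simply the values shown in this log.

package main

import (
	"fmt"
	"os/exec"
)

// runMinikube shells out to the minikube binary and returns its exit code.
func runMinikube(bin string, args ...string) int {
	cmd := exec.Command(bin, args...)
	if err := cmd.Run(); err != nil {
		if ee, ok := err.(*exec.ExitError); ok {
			return ee.ExitCode()
		}
		return -1
	}
	return 0
}

func main() {
	bin, profile := "out/minikube-darwin-amd64", "kubernetes-upgrade-111000"

	// Start on the old version, stop, then expect "status" to exit 7 (Stopped).
	runMinikube(bin, "start", "-p", profile, "--kubernetes-version=v1.16.0", "--driver=hyperkit")
	runMinikube(bin, "stop", "-p", profile)
	if code := runMinikube(bin, "status", "-p", profile, "--format={{.Host}}"); code != 7 {
		fmt.Println("expected exit 7 from status on a stopped cluster, got", code)
	}

	// Upgrade succeeds (exit 0); the attempted downgrade must be refused with exit 106.
	runMinikube(bin, "start", "-p", profile, "--kubernetes-version=v1.29.0-rc.2", "--driver=hyperkit")
	if code := runMinikube(bin, "start", "-p", profile, "--kubernetes-version=v1.16.0", "--driver=hyperkit"); code != 106 {
		fmt.Println("expected exit 106 for the refused downgrade, got", code)
	}
}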

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (3.37s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.32.0 on darwin
- MINIKUBE_LOCATION=17777
- KUBECONFIG=/Users/jenkins/minikube-integration/17777-1259/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current650169670/001
* Using the hyperkit driver based on user configuration
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

                                                
                                                
$ sudo chown root:wheel /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current650169670/001/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current650169670/001/.minikube/bin/docker-machine-driver-hyperkit 

                                                
                                                

                                                
                                                
! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current650169670/001/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Starting control plane node minikube in cluster minikube
* Download complete!
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (3.37s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (6.67s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.32.0 on darwin
- MINIKUBE_LOCATION=17777
- KUBECONFIG=/Users/jenkins/minikube-integration/17777-1259/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current2693727655/001
* Using the hyperkit driver based on user configuration
* Downloading driver docker-machine-driver-hyperkit:
E1212 15:22:28.115354    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/functional-004000/client.crt: no such file or directory
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

                                                
                                                
$ sudo chown root:wheel /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current2693727655/001/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current2693727655/001/.minikube/bin/docker-machine-driver-hyperkit 

                                                
                                                

                                                
                                                
! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current2693727655/001/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Starting control plane node minikube in cluster minikube
* Download complete!
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (6.67s)
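
Both SkipUpgrade subtests show why minikube asks for sudo here: the hyperkit driver binary has to be owned by root:wheel and carry the setuid bit, and in this non-interactive run the chown/chmod cannot be executed, hence the warning. Below is a small, assumption-laden Go sketch for checking whether a driver binary already has that ownership and mode; the path is a placeholder, not the temporary MINIKUBE_HOME used by these tests.

package main

import (
	"fmt"
	"os"
	"syscall"
)

func main() {
	// Placeholder path; the real binary lives under $MINIKUBE_HOME/.minikube/bin.
	path := "/usr/local/bin/docker-machine-driver-hyperkit"

	info, err := os.Stat(path)
	if err != nil {
		fmt.Println("stat failed:", err)
		return
	}

	setuid := info.Mode()&os.ModeSetuid != 0
	ownedByRoot := false
	if st, ok := info.Sys().(*syscall.Stat_t); ok {
		ownedByRoot = st.Uid == 0
	}

	// These two properties are what "sudo chown root:wheel" and "sudo chmod u+s" establish.
	fmt.Printf("root-owned=%v setuid=%v\n", ownedByRoot, setuid)
}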

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (1.61s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.61s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (153.67s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:196: (dbg) Run:  /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.6.2.1860498237.exe start -p stopped-upgrade-449000 --memory=2200 --vm-driver=hyperkit 
E1212 15:28:20.650080    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/skaffold-300000/client.crt: no such file or directory
version_upgrade_test.go:196: (dbg) Done: /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.6.2.1860498237.exe start -p stopped-upgrade-449000 --memory=2200 --vm-driver=hyperkit : (1m26.497480413s)
version_upgrade_test.go:205: (dbg) Run:  /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.6.2.1860498237.exe -p stopped-upgrade-449000 stop
version_upgrade_test.go:205: (dbg) Done: /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.6.2.1860498237.exe -p stopped-upgrade-449000 stop: (8.087805567s)
version_upgrade_test.go:211: (dbg) Run:  out/minikube-darwin-amd64 start -p stopped-upgrade-449000 --memory=2200 --alsologtostderr -v=1 --driver=hyperkit 
version_upgrade_test.go:211: (dbg) Done: out/minikube-darwin-amd64 start -p stopped-upgrade-449000 --memory=2200 --alsologtostderr -v=1 --driver=hyperkit : (59.079892488s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (153.67s)

                                                
                                    
x
+
TestPause/serial/Start (49.95s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-amd64 start -p pause-097000 --memory=2048 --install-addons=false --wait=all --driver=hyperkit 
pause_test.go:80: (dbg) Done: out/minikube-darwin-amd64 start -p pause-097000 --memory=2048 --install-addons=false --wait=all --driver=hyperkit : (49.953217367s)
--- PASS: TestPause/serial/Start (49.95s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (2.6s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:219: (dbg) Run:  out/minikube-darwin-amd64 logs -p stopped-upgrade-449000
version_upgrade_test.go:219: (dbg) Done: out/minikube-darwin-amd64 logs -p stopped-upgrade-449000: (2.599447315s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (2.60s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.58s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-322000 --no-kubernetes --kubernetes-version=1.20 --driver=hyperkit 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p NoKubernetes-322000 --no-kubernetes --kubernetes-version=1.20 --driver=hyperkit : exit status 14 (579.21071ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-322000] minikube v1.32.0 on Darwin 14.2
	  - MINIKUBE_LOCATION=17777
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17777-1259/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17777-1259/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.58s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (37.58s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-322000 --driver=hyperkit 
no_kubernetes_test.go:95: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-322000 --driver=hyperkit : (37.406487309s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-darwin-amd64 -p NoKubernetes-322000 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (37.58s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (35.19s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-darwin-amd64 start -p pause-097000 --alsologtostderr -v=1 --driver=hyperkit 
pause_test.go:92: (dbg) Done: out/minikube-darwin-amd64 start -p pause-097000 --alsologtostderr -v=1 --driver=hyperkit : (35.174865414s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (35.19s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (16.33s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-322000 --no-kubernetes --driver=hyperkit 
E1212 15:31:20.381837    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/ingress-addon-legacy-267000/client.crt: no such file or directory
no_kubernetes_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-322000 --no-kubernetes --driver=hyperkit : (13.741367355s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-darwin-amd64 -p NoKubernetes-322000 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p NoKubernetes-322000 status -o json: exit status 2 (161.101175ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-322000","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-darwin-amd64 delete -p NoKubernetes-322000
no_kubernetes_test.go:124: (dbg) Done: out/minikube-darwin-amd64 delete -p NoKubernetes-322000: (2.430729025s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (16.33s)

                                                
                                    
x
+
TestNoKubernetes/serial/Start (18.27s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-322000 --no-kubernetes --driver=hyperkit 
no_kubernetes_test.go:136: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-322000 --no-kubernetes --driver=hyperkit : (18.265530623s)
--- PASS: TestNoKubernetes/serial/Start (18.27s)

                                                
                                    
x
+
TestPause/serial/Pause (0.51s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 pause -p pause-097000 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.51s)

                                                
                                    
x
+
TestPause/serial/VerifyStatus (0.17s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-darwin-amd64 status -p pause-097000 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p pause-097000 --output=json --layout=cluster: exit status 2 (164.947454ms)

                                                
                                                
-- stdout --
	{"Name":"pause-097000","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-097000","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.17s)
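
The VerifyStatus stdout above is the status --output=json --layout=cluster document for a paused profile: the command exits 2 and reports HTTP-style codes per component (418 Paused, 200 OK, 405 Stopped). The Go sketch below decodes just the fields visible in that output; the struct names are invented for the example rather than taken from minikube's source.

package main

import (
	"encoding/json"
	"fmt"
)

type component struct {
	Name       string `json:"Name"`
	StatusCode int    `json:"StatusCode"`
	StatusName string `json:"StatusName"`
}

type node struct {
	Name       string               `json:"Name"`
	StatusCode int                  `json:"StatusCode"`
	StatusName string               `json:"StatusName"`
	Components map[string]component `json:"Components"`
}

type clusterStatus struct {
	Name          string `json:"Name"`
	StatusCode    int    `json:"StatusCode"`
	StatusName    string `json:"StatusName"`
	BinaryVersion string `json:"BinaryVersion"`
	Nodes         []node `json:"Nodes"`
}

func main() {
	// Abbreviated copy of the stdout captured by the test above.
	raw := `{"Name":"pause-097000","StatusCode":418,"StatusName":"Paused","BinaryVersion":"v1.32.0",
	"Nodes":[{"Name":"pause-097000","StatusCode":200,"StatusName":"OK",
	"Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},
	"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}`

	var st clusterStatus
	if err := json.Unmarshal([]byte(raw), &st); err != nil {
		fmt.Println("decode failed:", err)
		return
	}
	// Prints: Paused -> apiserver: Paused
	fmt.Println(st.StatusName, "-> apiserver:", st.Nodes[0].Components["apiserver"].StatusName)
}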

                                                
                                    
x
+
TestPause/serial/Unpause (0.53s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-darwin-amd64 unpause -p pause-097000 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.53s)

                                                
                                    
x
+
TestPause/serial/PauseAgain (0.63s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 pause -p pause-097000 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.63s)

                                                
                                    
x
+
TestPause/serial/DeletePaused (5.27s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 delete -p pause-097000 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-darwin-amd64 delete -p pause-097000 --alsologtostderr -v=5: (5.268862001s)
--- PASS: TestPause/serial/DeletePaused (5.27s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.13s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-amd64 ssh -p NoKubernetes-322000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p NoKubernetes-322000 "sudo systemctl is-active --quiet service kubelet": exit status 1 (129.575207ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.13s)
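
VerifyK8sNotRunning passes because the probe it runs over minikube ssh, "sudo systemctl is-active --quiet service kubelet", exits non-zero when kubelet is not an active unit (the remote status 3 in the stderr above). A minimal sketch of the same probe, assuming the binary path and profile shown in this log:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	bin, profile := "out/minikube-darwin-amd64", "NoKubernetes-322000"

	// Same check as the test: a zero exit would mean kubelet is active.
	cmd := exec.Command(bin, "ssh", "-p", profile,
		"sudo systemctl is-active --quiet service kubelet")
	if err := cmd.Run(); err == nil {
		fmt.Println("kubelet is running (unexpected for a --no-kubernetes profile)")
	} else {
		fmt.Println("kubelet is not running:", err)
	}
}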

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (28.87s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-amd64 profile list
E1212 15:31:51.646826    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/addons-609000/client.crt: no such file or directory
no_kubernetes_test.go:169: (dbg) Done: out/minikube-darwin-amd64 profile list: (28.559263509s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (28.87s)

                                                
                                    
x
+
TestPause/serial/VerifyDeletedResources (0.27s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestPause/serial/VerifyDeletedResources (0.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (52.54s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p auto-246000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=hyperkit 
E1212 15:31:58.722144    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/skaffold-300000/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p auto-246000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=hyperkit : (52.539621278s)
--- PASS: TestNetworkPlugins/group/auto/Start (52.54s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (2.25s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-amd64 stop -p NoKubernetes-322000
no_kubernetes_test.go:158: (dbg) Done: out/minikube-darwin-amd64 stop -p NoKubernetes-322000: (2.248156441s)
--- PASS: TestNoKubernetes/serial/Stop (2.25s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (17.15s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-322000 --driver=hyperkit 
E1212 15:32:26.453230    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/skaffold-300000/client.crt: no such file or directory
E1212 15:32:28.111142    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/functional-004000/client.crt: no such file or directory
no_kubernetes_test.go:191: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-322000 --driver=hyperkit : (17.145602925s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (17.15s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.13s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-amd64 ssh -p NoKubernetes-322000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p NoKubernetes-322000 "sudo systemctl is-active --quiet service kubelet": exit status 1 (130.31638ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (59.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p kindnet-246000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=hyperkit 
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p kindnet-246000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=hyperkit : (59.095070796s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (59.10s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p auto-246000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (12.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-246000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-9lnx9" [97910708-dfde-4e0f-841d-6c851230202c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-9lnx9" [97910708-dfde-4e0f-841d-6c851230202c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 12.007356968s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (12.17s)
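
Each NetCatPod step applies testdata/netcat-deployment.yaml through kubectl and then waits up to 15 minutes for a pod labelled app=netcat to reach Running, which is what the Pending/Running transitions above record. A rough client-go version of that wait follows, as an illustration rather than the helpers_test.go implementation; the kubeconfig location and namespace are assumptions.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig; the test selects the per-profile context (e.g. auto-246000).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	deadline := time.Now().Add(15 * time.Minute) // mirrors the 15m0s wait in the log
	for time.Now().Before(deadline) {
		pods, err := client.CoreV1().Pods("default").List(context.TODO(),
			metav1.ListOptions{LabelSelector: "app=netcat"})
		if err == nil {
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodRunning {
					fmt.Println("netcat pod is running:", p.Name)
					return
				}
			}
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for app=netcat")
}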

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-246000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-246000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-246000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.11s)
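
The Localhost and HairPin checks reuse the same nc probe inside the netcat deployment: Localhost dials 127.0.0.1:8080 in the pod itself, while HairPin dials the service name "netcat", so the connection has to hairpin back through the service to the pod that opened it. A short sketch that issues both probes the way the log shows, treating a zero exit as reachable; the context name is copied from this group.

package main

import (
	"fmt"
	"os/exec"
)

// probe runs the same nc command the test execs inside the netcat deployment.
func probe(kubeContext, target string) error {
	return exec.Command("kubectl", "--context", kubeContext,
		"exec", "deployment/netcat", "--",
		"/bin/sh", "-c", fmt.Sprintf("nc -w 5 -i 5 -z %s 8080", target)).Run()
}

func main() {
	ctx := "auto-246000"
	fmt.Println("localhost reachable:", probe(ctx, "localhost") == nil)
	fmt.Println("hairpin via service reachable:", probe(ctx, "netcat") == nil)
}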

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-tdw7v" [d058eab1-7850-49c6-8ab5-4bc3fec27b99] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.014548377s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p kindnet-246000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (10.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-246000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-htfmt" [3329121c-ae8d-4e28-8782-db9c81b9a174] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-htfmt" [3329121c-ae8d-4e28-8782-db9c81b9a174] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.007139186s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (59.51s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p custom-flannel-246000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=hyperkit 
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p custom-flannel-246000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=hyperkit : (59.509259819s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (59.51s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-246000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-246000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.10s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-246000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/false/Start (48.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p false-246000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=hyperkit 
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p false-246000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=hyperkit : (48.274463568s)
--- PASS: TestNetworkPlugins/group/false/Start (48.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p custom-flannel-246000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (12.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-246000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-pfkdn" [101d76c8-7e59-4c3c-bbdd-6961946a8b7a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-pfkdn" [101d76c8-7e59-4c3c-bbdd-6961946a8b7a] Running
E1212 15:34:54.704276    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/addons-609000/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 12.011258277s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (12.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-246000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-246000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-246000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/false/KubeletFlags (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p false-246000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/false/NetCatPod (11.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-246000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-f5hvw" [2cc063b8-8f9c-4e62-b500-e6aa5e7ad7a0] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-f5hvw" [2cc063b8-8f9c-4e62-b500-e6aa5e7ad7a0] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 11.007106494s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (11.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/false/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-246000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/false/Localhost (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-246000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.10s)

                                                
                                    
x
+
TestNetworkPlugins/group/false/HairPin (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-246000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.10s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (51.67s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p enable-default-cni-246000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=hyperkit 
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p enable-default-cni-246000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=hyperkit : (51.668798496s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (51.67s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (59.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p flannel-246000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=hyperkit 
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p flannel-246000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=hyperkit : (59.268080221s)
--- PASS: TestNetworkPlugins/group/flannel/Start (59.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p enable-default-cni-246000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-246000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-rlxmq" [ac6b2163-6d69-4db5-b346-b9163b92f94c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-rlxmq" [ac6b2163-6d69-4db5-b346-b9163b92f94c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.008176357s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-246000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-246000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-246000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (5.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-s24sb" [674e3b12-19df-426c-985e-01c72d0cbb8a] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 5.01296427s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (5.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p flannel-246000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (11.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-246000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-vsxwl" [860bfde5-0c2d-43ae-a07b-15e4252aba3b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-vsxwl" [860bfde5-0c2d-43ae-a07b-15e4252aba3b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.00804065s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (88.81s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p bridge-246000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=hyperkit 
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p bridge-246000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=hyperkit : (1m28.811671992s)
--- PASS: TestNetworkPlugins/group/bridge/Start (88.81s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-246000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-246000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.10s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-246000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.10s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet/Start (52.45s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p kubenet-246000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=hyperkit 
E1212 15:37:28.108977    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/functional-004000/client.crt: no such file or directory
E1212 15:37:45.447698    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/auto-246000/client.crt: no such file or directory
E1212 15:37:45.452911    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/auto-246000/client.crt: no such file or directory
E1212 15:37:45.464618    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/auto-246000/client.crt: no such file or directory
E1212 15:37:45.484974    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/auto-246000/client.crt: no such file or directory
E1212 15:37:45.525866    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/auto-246000/client.crt: no such file or directory
E1212 15:37:45.607859    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/auto-246000/client.crt: no such file or directory
E1212 15:37:45.768432    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/auto-246000/client.crt: no such file or directory
E1212 15:37:46.090140    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/auto-246000/client.crt: no such file or directory
E1212 15:37:46.731278    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/auto-246000/client.crt: no such file or directory
E1212 15:37:48.011536    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/auto-246000/client.crt: no such file or directory
E1212 15:37:50.572918    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/auto-246000/client.crt: no such file or directory
E1212 15:37:55.695201    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/auto-246000/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p kubenet-246000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=hyperkit : (52.454143873s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (52.45s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet/KubeletFlags (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p kubenet-246000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet/NetCatPod (10.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-246000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-sxpm5" [0660ba47-22ad-40a9-bce3-9b8e58fd18b3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-sxpm5" [0660ba47-22ad-40a9-bce3-9b8e58fd18b3] Running
E1212 15:38:05.944781    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/auto-246000/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 10.008762406s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (10.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p bridge-246000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (11.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-246000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-g6dg8" [f93fd354-49d5-4ddd-9eed-86202fd95573] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-g6dg8" [f93fd354-49d5-4ddd-9eed-86202fd95573] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.010482821s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet/DNS (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-246000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.13s)

TestNetworkPlugins/group/kubenet/Localhost (0.12s)
=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-246000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.12s)

TestNetworkPlugins/group/kubenet/HairPin (0.10s)
=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-246000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.10s)

TestNetworkPlugins/group/bridge/DNS (0.13s)
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-246000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.13s)

TestNetworkPlugins/group/bridge/Localhost (0.11s)
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-246000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.11s)

TestNetworkPlugins/group/bridge/HairPin (0.11s)
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-246000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.11s)
E1212 15:53:39.102703    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/kindnet-246000/client.crt: no such file or directory
E1212 15:53:40.617923    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/old-k8s-version-711000/client.crt: no such file or directory
E1212 15:54:08.577335    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/auto-246000/client.crt: no such file or directory
E1212 15:54:42.559700    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/no-preload-823000/client.crt: no such file or directory
E1212 15:54:46.553834    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/custom-flannel-246000/client.crt: no such file or directory
E1212 15:55:01.292724    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/false-246000/client.crt: no such file or directory
E1212 15:55:02.229745    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/kindnet-246000/client.crt: no such file or directory
E1212 15:55:10.249637    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/no-preload-823000/client.crt: no such file or directory

TestStartStop/group/old-k8s-version/serial/FirstStart (152.54s)
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p old-k8s-version-711000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=hyperkit  --kubernetes-version=v1.16.0
E1212 15:38:26.429678    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/auto-246000/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p old-k8s-version-711000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=hyperkit  --kubernetes-version=v1.16.0: (2m32.535389622s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (152.54s)

TestStartStop/group/no-preload/serial/FirstStart (66.54s)
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p no-preload-823000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperkit  --kubernetes-version=v1.29.0-rc.2
E1212 15:38:39.116594    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/kindnet-246000/client.crt: no such file or directory
E1212 15:38:39.122356    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/kindnet-246000/client.crt: no such file or directory
E1212 15:38:39.132511    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/kindnet-246000/client.crt: no such file or directory
E1212 15:38:39.154278    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/kindnet-246000/client.crt: no such file or directory
E1212 15:38:39.195665    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/kindnet-246000/client.crt: no such file or directory
E1212 15:38:39.275892    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/kindnet-246000/client.crt: no such file or directory
E1212 15:38:39.437220    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/kindnet-246000/client.crt: no such file or directory
E1212 15:38:39.757677    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/kindnet-246000/client.crt: no such file or directory
E1212 15:38:40.399309    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/kindnet-246000/client.crt: no such file or directory
E1212 15:38:41.681271    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/kindnet-246000/client.crt: no such file or directory
E1212 15:38:44.241449    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/kindnet-246000/client.crt: no such file or directory
E1212 15:38:49.362006    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/kindnet-246000/client.crt: no such file or directory
E1212 15:38:59.602321    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/kindnet-246000/client.crt: no such file or directory
E1212 15:39:07.390770    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/auto-246000/client.crt: no such file or directory
E1212 15:39:20.082990    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/kindnet-246000/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p no-preload-823000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperkit  --kubernetes-version=v1.29.0-rc.2: (1m6.537288538s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (66.54s)

TestStartStop/group/no-preload/serial/DeployApp (9.54s)
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-823000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [ff21f004-b303-4409-939e-ccfb0921fb40] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [ff21f004-b303-4409-939e-ccfb0921fb40] Running
E1212 15:39:46.490284    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/custom-flannel-246000/client.crt: no such file or directory
E1212 15:39:46.496422    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/custom-flannel-246000/client.crt: no such file or directory
E1212 15:39:46.507690    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/custom-flannel-246000/client.crt: no such file or directory
E1212 15:39:46.528133    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/custom-flannel-246000/client.crt: no such file or directory
E1212 15:39:46.570199    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/custom-flannel-246000/client.crt: no such file or directory
E1212 15:39:46.650325    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/custom-flannel-246000/client.crt: no such file or directory
E1212 15:39:46.811914    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/custom-flannel-246000/client.crt: no such file or directory
E1212 15:39:47.132028    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/custom-flannel-246000/client.crt: no such file or directory
E1212 15:39:47.773059    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/custom-flannel-246000/client.crt: no such file or directory
E1212 15:39:49.053716    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/custom-flannel-246000/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.018174463s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-823000 exec busybox -- /bin/sh -c "ulimit -n"
E1212 15:39:51.614710    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/custom-flannel-246000/client.crt: no such file or directory
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.54s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.79s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p no-preload-823000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-823000 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.79s)

TestStartStop/group/no-preload/serial/Stop (8.30s)
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p no-preload-823000 --alsologtostderr -v=3
E1212 15:39:56.735526    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/custom-flannel-246000/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p no-preload-823000 --alsologtostderr -v=3: (8.303375992s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (8.30s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.33s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-823000 -n no-preload-823000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-823000 -n no-preload-823000: exit status 7 (69.000109ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p no-preload-823000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E1212 15:40:01.043341    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/kindnet-246000/client.crt: no such file or directory
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.33s)

TestStartStop/group/no-preload/serial/SecondStart (301.16s)
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p no-preload-823000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperkit  --kubernetes-version=v1.29.0-rc.2
E1212 15:40:01.227464    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/false-246000/client.crt: no such file or directory
E1212 15:40:01.233428    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/false-246000/client.crt: no such file or directory
E1212 15:40:01.245191    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/false-246000/client.crt: no such file or directory
E1212 15:40:01.265359    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/false-246000/client.crt: no such file or directory
E1212 15:40:01.306010    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/false-246000/client.crt: no such file or directory
E1212 15:40:01.386498    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/false-246000/client.crt: no such file or directory
E1212 15:40:01.546642    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/false-246000/client.crt: no such file or directory
E1212 15:40:01.866916    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/false-246000/client.crt: no such file or directory
E1212 15:40:02.508959    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/false-246000/client.crt: no such file or directory
E1212 15:40:03.789114    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/false-246000/client.crt: no such file or directory
E1212 15:40:06.350910    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/false-246000/client.crt: no such file or directory
E1212 15:40:06.976649    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/custom-flannel-246000/client.crt: no such file or directory
E1212 15:40:11.471527    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/false-246000/client.crt: no such file or directory
E1212 15:40:21.713119    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/false-246000/client.crt: no such file or directory
E1212 15:40:27.456548    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/custom-flannel-246000/client.crt: no such file or directory
E1212 15:40:29.310217    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/auto-246000/client.crt: no such file or directory
E1212 15:40:31.184361    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/functional-004000/client.crt: no such file or directory
E1212 15:40:42.193361    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/false-246000/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p no-preload-823000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperkit  --kubernetes-version=v1.29.0-rc.2: (5m0.992743259s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-823000 -n no-preload-823000
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (301.16s)

TestStartStop/group/old-k8s-version/serial/DeployApp (10.28s)
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-711000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [4a4a7870-c1e5-4c05-b66d-4a88cb338daa] Pending
helpers_test.go:344: "busybox" [4a4a7870-c1e5-4c05-b66d-4a88cb338daa] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [4a4a7870-c1e5-4c05-b66d-4a88cb338daa] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.01753054s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-711000 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.28s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.70s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p old-k8s-version-711000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-711000 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.70s)

TestStartStop/group/old-k8s-version/serial/Stop (8.26s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p old-k8s-version-711000 --alsologtostderr -v=3
E1212 15:41:08.417391    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/custom-flannel-246000/client.crt: no such file or directory
E1212 15:41:08.554422    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/enable-default-cni-246000/client.crt: no such file or directory
E1212 15:41:08.560579    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/enable-default-cni-246000/client.crt: no such file or directory
E1212 15:41:08.571052    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/enable-default-cni-246000/client.crt: no such file or directory
E1212 15:41:08.591601    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/enable-default-cni-246000/client.crt: no such file or directory
E1212 15:41:08.633011    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/enable-default-cni-246000/client.crt: no such file or directory
E1212 15:41:08.713265    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/enable-default-cni-246000/client.crt: no such file or directory
E1212 15:41:08.874088    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/enable-default-cni-246000/client.crt: no such file or directory
E1212 15:41:09.194378    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/enable-default-cni-246000/client.crt: no such file or directory
E1212 15:41:09.834482    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/enable-default-cni-246000/client.crt: no such file or directory
E1212 15:41:11.115358    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/enable-default-cni-246000/client.crt: no such file or directory
E1212 15:41:13.675517    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/enable-default-cni-246000/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p old-k8s-version-711000 --alsologtostderr -v=3: (8.259692423s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (8.26s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.34s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-711000 -n old-k8s-version-711000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-711000 -n old-k8s-version-711000: exit status 7 (67.87906ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p old-k8s-version-711000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.34s)

TestStartStop/group/old-k8s-version/serial/SecondStart (457.23s)
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p old-k8s-version-711000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=hyperkit  --kubernetes-version=v1.16.0
E1212 15:41:18.796119    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/enable-default-cni-246000/client.crt: no such file or directory
E1212 15:41:20.389104    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/ingress-addon-legacy-267000/client.crt: no such file or directory
E1212 15:41:22.963043    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/kindnet-246000/client.crt: no such file or directory
E1212 15:41:23.152959    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/false-246000/client.crt: no such file or directory
E1212 15:41:29.037899    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/enable-default-cni-246000/client.crt: no such file or directory
E1212 15:41:29.155359    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/flannel-246000/client.crt: no such file or directory
E1212 15:41:29.160892    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/flannel-246000/client.crt: no such file or directory
E1212 15:41:29.172973    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/flannel-246000/client.crt: no such file or directory
E1212 15:41:29.194687    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/flannel-246000/client.crt: no such file or directory
E1212 15:41:29.235619    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/flannel-246000/client.crt: no such file or directory
E1212 15:41:29.316769    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/flannel-246000/client.crt: no such file or directory
E1212 15:41:29.476969    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/flannel-246000/client.crt: no such file or directory
E1212 15:41:29.797359    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/flannel-246000/client.crt: no such file or directory
E1212 15:41:30.439008    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/flannel-246000/client.crt: no such file or directory
E1212 15:41:31.720038    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/flannel-246000/client.crt: no such file or directory
E1212 15:41:34.280589    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/flannel-246000/client.crt: no such file or directory
E1212 15:41:39.400852    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/flannel-246000/client.crt: no such file or directory
E1212 15:41:49.517930    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/enable-default-cni-246000/client.crt: no such file or directory
E1212 15:41:49.641337    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/flannel-246000/client.crt: no such file or directory
E1212 15:41:51.653227    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/addons-609000/client.crt: no such file or directory
E1212 15:41:58.730232    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/skaffold-300000/client.crt: no such file or directory
E1212 15:42:10.121286    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/flannel-246000/client.crt: no such file or directory
E1212 15:42:28.117752    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/functional-004000/client.crt: no such file or directory
E1212 15:42:30.336734    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/custom-flannel-246000/client.crt: no such file or directory
E1212 15:42:30.477637    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/enable-default-cni-246000/client.crt: no such file or directory
E1212 15:42:45.073667    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/false-246000/client.crt: no such file or directory
E1212 15:42:45.457489    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/auto-246000/client.crt: no such file or directory
E1212 15:42:51.081520    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/flannel-246000/client.crt: no such file or directory
E1212 15:42:56.618280    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/kubenet-246000/client.crt: no such file or directory
E1212 15:42:56.623733    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/kubenet-246000/client.crt: no such file or directory
E1212 15:42:56.634679    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/kubenet-246000/client.crt: no such file or directory
E1212 15:42:56.655433    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/kubenet-246000/client.crt: no such file or directory
E1212 15:42:56.695567    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/kubenet-246000/client.crt: no such file or directory
E1212 15:42:56.775758    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/kubenet-246000/client.crt: no such file or directory
E1212 15:42:56.937082    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/kubenet-246000/client.crt: no such file or directory
E1212 15:42:57.258340    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/kubenet-246000/client.crt: no such file or directory
E1212 15:42:57.898969    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/kubenet-246000/client.crt: no such file or directory
E1212 15:42:59.180651    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/kubenet-246000/client.crt: no such file or directory
E1212 15:43:01.740750    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/kubenet-246000/client.crt: no such file or directory
E1212 15:43:06.539546    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/bridge-246000/client.crt: no such file or directory
E1212 15:43:06.545015    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/bridge-246000/client.crt: no such file or directory
E1212 15:43:06.556719    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/bridge-246000/client.crt: no such file or directory
E1212 15:43:06.577027    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/bridge-246000/client.crt: no such file or directory
E1212 15:43:06.617758    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/bridge-246000/client.crt: no such file or directory
E1212 15:43:06.698349    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/bridge-246000/client.crt: no such file or directory
E1212 15:43:06.860454    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/bridge-246000/client.crt: no such file or directory
E1212 15:43:06.860769    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/kubenet-246000/client.crt: no such file or directory
E1212 15:43:07.181902    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/bridge-246000/client.crt: no such file or directory
E1212 15:43:07.822717    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/bridge-246000/client.crt: no such file or directory
E1212 15:43:09.103366    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/bridge-246000/client.crt: no such file or directory
E1212 15:43:11.664273    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/bridge-246000/client.crt: no such file or directory
E1212 15:43:13.148328    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/auto-246000/client.crt: no such file or directory
E1212 15:43:16.784455    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/bridge-246000/client.crt: no such file or directory
E1212 15:43:17.103247    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/kubenet-246000/client.crt: no such file or directory
E1212 15:43:21.819971    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/skaffold-300000/client.crt: no such file or directory
E1212 15:43:27.024706    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/bridge-246000/client.crt: no such file or directory
E1212 15:43:37.583038    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/kubenet-246000/client.crt: no such file or directory
E1212 15:43:39.112825    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/kindnet-246000/client.crt: no such file or directory
E1212 15:43:47.504772    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/bridge-246000/client.crt: no such file or directory
E1212 15:43:52.396853    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/enable-default-cni-246000/client.crt: no such file or directory
E1212 15:44:06.801611    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/kindnet-246000/client.crt: no such file or directory
E1212 15:44:13.000650    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/flannel-246000/client.crt: no such file or directory
E1212 15:44:18.542836    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/kubenet-246000/client.crt: no such file or directory
E1212 15:44:23.448159    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/ingress-addon-legacy-267000/client.crt: no such file or directory
E1212 15:44:28.465051    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/bridge-246000/client.crt: no such file or directory
E1212 15:44:46.485373    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/custom-flannel-246000/client.crt: no such file or directory
E1212 15:45:01.223508    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/false-246000/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p old-k8s-version-711000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=hyperkit  --kubernetes-version=v1.16.0: (7m37.032168858s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-711000 -n old-k8s-version-711000
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (457.23s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (5.01s)
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-gm4kx" [4ebd5980-077d-4dbd-9294-52b701f4be51] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.011063612s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (5.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.06s)
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-gm4kx" [4ebd5980-077d-4dbd-9294-52b701f4be51] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.008418303s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-823000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.06s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.17s)
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 -p no-preload-823000 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.17s)

TestStartStop/group/no-preload/serial/Pause (1.95s)
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p no-preload-823000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-823000 -n no-preload-823000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-823000 -n no-preload-823000: exit status 2 (165.109659ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-823000 -n no-preload-823000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-823000 -n no-preload-823000: exit status 2 (164.503653ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p no-preload-823000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-823000 -n no-preload-823000
E1212 15:45:14.175321    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/custom-flannel-246000/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-823000 -n no-preload-823000
--- PASS: TestStartStop/group/no-preload/serial/Pause (1.95s)

TestStartStop/group/embed-certs/serial/FirstStart (49.34s)
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p embed-certs-949000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=hyperkit  --kubernetes-version=v1.28.4
E1212 15:45:28.912507    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/false-246000/client.crt: no such file or directory
E1212 15:45:40.462043    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/kubenet-246000/client.crt: no such file or directory
E1212 15:45:50.384091    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/bridge-246000/client.crt: no such file or directory
E1212 15:46:08.550704    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/enable-default-cni-246000/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p embed-certs-949000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=hyperkit  --kubernetes-version=v1.28.4: (49.339346944s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (49.34s)

TestStartStop/group/embed-certs/serial/DeployApp (9.26s)
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-949000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [f9dc5858-2a2d-4acb-961a-74013b3468aa] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [f9dc5858-2a2d-4acb-961a-74013b3468aa] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.016632492s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-949000 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.26s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.85s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p embed-certs-949000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-949000 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.85s)

TestStartStop/group/embed-certs/serial/Stop (8.23s)
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p embed-certs-949000 --alsologtostderr -v=3
E1212 15:46:20.384801    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/ingress-addon-legacy-267000/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p embed-certs-949000 --alsologtostderr -v=3: (8.233248611s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (8.23s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.34s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-949000 -n embed-certs-949000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-949000 -n embed-certs-949000: exit status 7 (67.882578ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p embed-certs-949000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.34s)

TestStartStop/group/embed-certs/serial/SecondStart (300.29s)
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p embed-certs-949000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=hyperkit  --kubernetes-version=v1.28.4
E1212 15:46:29.152254    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/flannel-246000/client.crt: no such file or directory
E1212 15:46:36.234563    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/enable-default-cni-246000/client.crt: no such file or directory
E1212 15:46:51.648973    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/addons-609000/client.crt: no such file or directory
E1212 15:46:56.839887    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/flannel-246000/client.crt: no such file or directory
E1212 15:46:58.724486    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/skaffold-300000/client.crt: no such file or directory
E1212 15:47:28.112662    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/functional-004000/client.crt: no such file or directory
E1212 15:47:45.452493    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/auto-246000/client.crt: no such file or directory
E1212 15:47:56.614305    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/kubenet-246000/client.crt: no such file or directory
E1212 15:48:06.535261    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/bridge-246000/client.crt: no such file or directory
E1212 15:48:24.300722    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/kubenet-246000/client.crt: no such file or directory
E1212 15:48:34.222567    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/bridge-246000/client.crt: no such file or directory
E1212 15:48:39.107372    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/kindnet-246000/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p embed-certs-949000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=hyperkit  --kubernetes-version=v1.28.4: (5m0.104329923s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-949000 -n embed-certs-949000
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (300.29s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.01s)
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-qss6b" [e984f7de-57fb-4a2c-a17b-e640e3f2d4d7] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.012896851s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.06s)
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-qss6b" [e984f7de-57fb-4a2c-a17b-e640e3f2d4d7] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.007062277s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-711000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.06s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.17s)
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 -p old-k8s-version-711000 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.17s)

TestStartStop/group/old-k8s-version/serial/Pause (1.80s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p old-k8s-version-711000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-711000 -n old-k8s-version-711000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-711000 -n old-k8s-version-711000: exit status 2 (165.195537ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p old-k8s-version-711000 -n old-k8s-version-711000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p old-k8s-version-711000 -n old-k8s-version-711000: exit status 2 (165.148481ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p old-k8s-version-711000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-711000 -n old-k8s-version-711000
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p old-k8s-version-711000 -n old-k8s-version-711000
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (1.80s)
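For reference, the pause/unpause verification sequence above can be reproduced outside the test harness with the same commands. The following is a minimal Go sketch (not the minikube test helpers); the binary path and the old-k8s-version-711000 profile name are taken from this report and would need to be adjusted for a local run.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// minikube runs the same binary the test uses and returns trimmed output plus exit code.
func minikube(args ...string) (string, int) {
	out, err := exec.Command("out/minikube-darwin-amd64", args...).CombinedOutput()
	code := 0
	if exitErr, ok := err.(*exec.ExitError); ok {
		code = exitErr.ExitCode()
	}
	return strings.TrimSpace(string(out)), code
}

func main() {
	const profile = "old-k8s-version-711000"

	minikube("pause", "-p", profile, "--alsologtostderr", "-v=1")

	// While paused, `status` exits with code 2 and reports APIServer=Paused,
	// Kubelet=Stopped; the test treats that exit code as acceptable.
	api, code := minikube("status", "--format={{.APIServer}}", "-p", profile, "-n", profile)
	fmt.Printf("paused: apiserver=%s (exit %d)\n", api, code)
	kubelet, code := minikube("status", "--format={{.Kubelet}}", "-p", profile, "-n", profile)
	fmt.Printf("paused: kubelet=%s (exit %d)\n", kubelet, code)

	minikube("unpause", "-p", profile, "--alsologtostderr", "-v=1")

	// After unpausing, both status queries succeed again (exit 0).
	api, code = minikube("status", "--format={{.APIServer}}", "-p", profile, "-n", profile)
	fmt.Printf("unpaused: apiserver=%s (exit %d)\n", api, code)
}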

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (61.88s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p default-k8s-diff-port-583000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=hyperkit  --kubernetes-version=v1.28.4
E1212 15:49:42.487911    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/no-preload-823000/client.crt: no such file or directory
E1212 15:49:42.493672    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/no-preload-823000/client.crt: no such file or directory
E1212 15:49:42.504032    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/no-preload-823000/client.crt: no such file or directory
E1212 15:49:42.526052    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/no-preload-823000/client.crt: no such file or directory
E1212 15:49:42.566984    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/no-preload-823000/client.crt: no such file or directory
E1212 15:49:42.647428    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/no-preload-823000/client.crt: no such file or directory
E1212 15:49:42.808287    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/no-preload-823000/client.crt: no such file or directory
E1212 15:49:43.129726    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/no-preload-823000/client.crt: no such file or directory
E1212 15:49:43.770618    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/no-preload-823000/client.crt: no such file or directory
E1212 15:49:45.051341    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/no-preload-823000/client.crt: no such file or directory
E1212 15:49:46.481888    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/custom-flannel-246000/client.crt: no such file or directory
E1212 15:49:47.612813    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/no-preload-823000/client.crt: no such file or directory
E1212 15:49:52.733284    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/no-preload-823000/client.crt: no such file or directory
E1212 15:50:01.219445    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/false-246000/client.crt: no such file or directory
E1212 15:50:02.973443    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/no-preload-823000/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p default-k8s-diff-port-583000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=hyperkit  --kubernetes-version=v1.28.4: (1m1.883704759s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (61.88s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.25s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-583000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [5c283c57-6cea-4254-98bd-0bbd5b419ca4] Pending
helpers_test.go:344: "busybox" [5c283c57-6cea-4254-98bd-0bbd5b419ca4] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [5c283c57-6cea-4254-98bd-0bbd5b419ca4] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.017092721s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-583000 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.25s)
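The DeployApp steps above can be approximated directly with kubectl. This is a hedged sketch, assuming kubectl on PATH and the default-k8s-diff-port-583000 context shown in this log; `kubectl wait` stands in for the test's 8m0s poll loop, and testdata/busybox.yaml is the manifest the test applies.

package main

import (
	"fmt"
	"os/exec"
)

// kubectl runs a command against the profile's kubeconfig context.
func kubectl(args ...string) (string, error) {
	full := append([]string{"--context", "default-k8s-diff-port-583000"}, args...)
	out, err := exec.Command("kubectl", full...).CombinedOutput()
	return string(out), err
}

func main() {
	// Create the busybox pod from the test's manifest.
	if out, err := kubectl("create", "-f", "testdata/busybox.yaml"); err != nil {
		fmt.Println(out, err)
		return
	}
	// Wait for the pod matching the test's label selector to become Ready.
	if out, err := kubectl("wait", "--for=condition=Ready", "pod",
		"-l", "integration-test=busybox", "--timeout=8m"); err != nil {
		fmt.Println(out, err)
		return
	}
	// The test's final check: the container's open-file limit.
	out, _ := kubectl("exec", "busybox", "--", "/bin/sh", "-c", "ulimit -n")
	fmt.Print("ulimit -n: ", out)
}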

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.88s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p default-k8s-diff-port-583000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E1212 15:50:23.453379    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/no-preload-823000/client.crt: no such file or directory
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-583000 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.88s)
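The addon is enabled above with image and registry overrides (--images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain), so the metrics-server deployment should reference the fake.domain registry rather than the real metrics-server image. A small sketch, assuming kubectl and the same context, that reads back the image the deployment actually uses:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Read the container image from the metrics-server deployment in kube-system.
	out, err := exec.Command("kubectl",
		"--context", "default-k8s-diff-port-583000",
		"-n", "kube-system",
		"get", "deployment", "metrics-server",
		"-o", "jsonpath={.spec.template.spec.containers[0].image}").CombinedOutput()
	if err != nil {
		fmt.Println(string(out), err)
		return
	}
	fmt.Println("metrics-server image:", string(out))
}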

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (8.26s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p default-k8s-diff-port-583000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p default-k8s-diff-port-583000 --alsologtostderr -v=3: (8.264214756s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (8.26s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.32s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-583000 -n default-k8s-diff-port-583000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-583000 -n default-k8s-diff-port-583000: exit status 7 (68.633811ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p default-k8s-diff-port-583000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.32s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (296.23s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p default-k8s-diff-port-583000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=hyperkit  --kubernetes-version=v1.28.4
E1212 15:50:56.772581    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/old-k8s-version-711000/client.crt: no such file or directory
E1212 15:50:56.777725    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/old-k8s-version-711000/client.crt: no such file or directory
E1212 15:50:56.787962    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/old-k8s-version-711000/client.crt: no such file or directory
E1212 15:50:56.809353    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/old-k8s-version-711000/client.crt: no such file or directory
E1212 15:50:56.850431    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/old-k8s-version-711000/client.crt: no such file or directory
E1212 15:50:56.931476    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/old-k8s-version-711000/client.crt: no such file or directory
E1212 15:50:57.091647    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/old-k8s-version-711000/client.crt: no such file or directory
E1212 15:50:57.412294    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/old-k8s-version-711000/client.crt: no such file or directory
E1212 15:50:58.053157    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/old-k8s-version-711000/client.crt: no such file or directory
E1212 15:50:59.333720    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/old-k8s-version-711000/client.crt: no such file or directory
E1212 15:51:01.894269    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/old-k8s-version-711000/client.crt: no such file or directory
E1212 15:51:04.413179    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/no-preload-823000/client.crt: no such file or directory
E1212 15:51:07.014908    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/old-k8s-version-711000/client.crt: no such file or directory
E1212 15:51:08.546497    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/enable-default-cni-246000/client.crt: no such file or directory
E1212 15:51:17.255842    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/old-k8s-version-711000/client.crt: no such file or directory
E1212 15:51:20.379103    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/ingress-addon-legacy-267000/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p default-k8s-diff-port-583000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=hyperkit  --kubernetes-version=v1.28.4: (4m56.058915719s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-583000 -n default-k8s-diff-port-583000
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (296.23s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-vjhqc" [a35f8321-933e-457c-bd5e-10f01cafa087] Running
E1212 15:51:29.147003    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/flannel-246000/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.012745005s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.01s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.06s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-vjhqc" [a35f8321-933e-457c-bd5e-10f01cafa087] Running
E1212 15:51:34.704875    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/addons-609000/client.crt: no such file or directory
E1212 15:51:37.736756    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/old-k8s-version-711000/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.007435739s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-949000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.06s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.16s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 -p embed-certs-949000 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.16s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Pause (1.95s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p embed-certs-949000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-949000 -n embed-certs-949000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-949000 -n embed-certs-949000: exit status 2 (157.227119ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-949000 -n embed-certs-949000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-949000 -n embed-certs-949000: exit status 2 (159.557128ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p embed-certs-949000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-949000 -n embed-certs-949000
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-949000 -n embed-certs-949000
--- PASS: TestStartStop/group/embed-certs/serial/Pause (1.95s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (47.23s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p newest-cni-715000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=hyperkit  --kubernetes-version=v1.29.0-rc.2
E1212 15:51:51.643955    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/addons-609000/client.crt: no such file or directory
E1212 15:51:58.720006    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/skaffold-300000/client.crt: no such file or directory
E1212 15:52:18.698187    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/old-k8s-version-711000/client.crt: no such file or directory
E1212 15:52:26.332562    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/no-preload-823000/client.crt: no such file or directory
E1212 15:52:28.108073    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/functional-004000/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p newest-cni-715000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=hyperkit  --kubernetes-version=v1.29.0-rc.2: (47.233995522s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (47.23s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.02s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p newest-cni-715000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-darwin-amd64 addons enable metrics-server -p newest-cni-715000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.01552524s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.02s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (8.3s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p newest-cni-715000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p newest-cni-715000 --alsologtostderr -v=3: (8.296466822s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (8.30s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.33s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-715000 -n newest-cni-715000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-715000 -n newest-cni-715000: exit status 7 (67.20889ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p newest-cni-715000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.33s)
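The EnableAddonAfterStop flow above relies on two behaviours: while the VM is stopped, `status --format={{.Host}}` prints Stopped and exits with code 7 (which the test tolerates), and addons can still be enabled so the change takes effect on the next start. A minimal Go sketch of the same two commands against the newest-cni-715000 profile from this report:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// run invokes the minikube binary used in this report and returns output plus exit code.
func run(args ...string) (string, int) {
	out, err := exec.Command("out/minikube-darwin-amd64", args...).CombinedOutput()
	code := 0
	if exitErr, ok := err.(*exec.ExitError); ok {
		code = exitErr.ExitCode()
	}
	return strings.TrimSpace(string(out)), code
}

func main() {
	const profile = "newest-cni-715000"

	// Expected while stopped: "Stopped" with exit code 7.
	host, code := run("status", "--format={{.Host}}", "-p", profile, "-n", profile)
	fmt.Printf("host=%s exit=%d\n", host, code)

	// Enabling the dashboard addon works even though the cluster is down.
	out, code := run("addons", "enable", "dashboard", "-p", profile,
		"--images=MetricsScraper=registry.k8s.io/echoserver:1.4")
	fmt.Printf("enable dashboard: exit=%d\n%s\n", code, out)
}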

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (37.62s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p newest-cni-715000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=hyperkit  --kubernetes-version=v1.29.0-rc.2
E1212 15:52:45.448082    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/auto-246000/client.crt: no such file or directory
E1212 15:52:56.609789    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/kubenet-246000/client.crt: no such file or directory
E1212 15:53:06.530925    1720 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/bridge-246000/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p newest-cni-715000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=hyperkit  --kubernetes-version=v1.29.0-rc.2: (37.457574323s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-715000 -n newest-cni-715000
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (37.62s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.16s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 -p newest-cni-715000 image list --format=json
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.16s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (1.76s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p newest-cni-715000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-715000 -n newest-cni-715000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-715000 -n newest-cni-715000: exit status 2 (157.410403ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p newest-cni-715000 -n newest-cni-715000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p newest-cni-715000 -n newest-cni-715000: exit status 2 (157.969048ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p newest-cni-715000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-715000 -n newest-cni-715000
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p newest-cni-715000 -n newest-cni-715000
--- PASS: TestStartStop/group/newest-cni/serial/Pause (1.76s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (5.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-rvvj4" [51a5e7e3-0e4b-4cf0-8731-2a84497e777d] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.01292074s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (5.01s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.06s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-rvvj4" [51a5e7e3-0e4b-4cf0-8731-2a84497e777d] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.007066864s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-583000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.06s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.17s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 -p default-k8s-diff-port-583000 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.17s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Pause (1.89s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p default-k8s-diff-port-583000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-583000 -n default-k8s-diff-port-583000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-583000 -n default-k8s-diff-port-583000: exit status 2 (166.946935ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-583000 -n default-k8s-diff-port-583000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-583000 -n default-k8s-diff-port-583000: exit status 2 (165.887883ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p default-k8s-diff-port-583000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-583000 -n default-k8s-diff-port-583000
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-583000 -n default-k8s-diff-port-583000
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (1.89s)

                                                
                                    

Test skip (22/323)

x
+
TestDownloadOnly/v1.16.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.4/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.4/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:213: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:497: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false darwin amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestKVMDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

                                                
                                                
=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:297: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (5.89s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:523: 
----------------------- debugLogs start: cilium-246000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-246000

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-246000

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-246000

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-246000

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-246000

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-246000

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-246000

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-246000

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-246000

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-246000

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-246000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-246000"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-246000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-246000"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-246000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-246000"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-246000

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-246000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-246000"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-246000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-246000"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-246000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-246000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-246000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-246000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-246000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-246000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-246000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-246000" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-246000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-246000"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-246000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-246000"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-246000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-246000"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-246000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-246000"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-246000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-246000"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-246000

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-246000

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-246000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-246000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-246000

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-246000

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-246000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-246000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-246000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-246000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-246000" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-246000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-246000"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-246000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-246000"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-246000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-246000"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-246000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-246000"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-246000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-246000"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /Users/jenkins/minikube-integration/17777-1259/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 12 Dec 2023 15:15:10 PST
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.169.0.14:8443
  name: multinode-449000-m01
contexts:
- context:
    cluster: multinode-449000-m01
    extensions:
    - extension:
        last-update: Tue, 12 Dec 2023 15:15:10 PST
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: context_info
    namespace: default
    user: multinode-449000-m01
  name: multinode-449000-m01
current-context: ""
kind: Config
preferences: {}
users:
- name: multinode-449000-m01
  user:
    client-certificate: /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/multinode-449000-m01/client.crt
    client-key: /Users/jenkins/minikube-integration/17777-1259/.minikube/profiles/multinode-449000-m01/client.key
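This config dump is the root cause of the context errors above: the kubeconfig contains only the multinode-449000-m01 entries and current-context is empty, so neither an explicit --context cilium-246000 nor the default context can resolve. A minimal check from the same environment, using standard kubectl/minikube commands and the KUBECONFIG path set in this run:

	export KUBECONFIG=/Users/jenkins/minikube-integration/17777-1259/kubeconfig
	kubectl config get-contexts        # only multinode-449000-m01 is listed
	kubectl config current-context     # errors: current-context is not set
	minikube profile list              # cilium-246000 is absent, matching the hints above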

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-246000

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-246000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-246000"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-246000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-246000"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-246000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-246000"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-246000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-246000"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-246000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-246000"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-246000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-246000"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-246000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-246000"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-246000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-246000"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-246000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-246000"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-246000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-246000"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-246000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-246000"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-246000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-246000"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-246000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-246000"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-246000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-246000"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-246000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-246000"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-246000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-246000"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-246000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-246000"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-246000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-246000"

                                                
                                                
----------------------- debugLogs end: cilium-246000 [took: 5.503641093s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-246000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cilium-246000
--- SKIP: TestNetworkPlugins/group/cilium (5.89s)

                                                
                                    
TestStartStop/group/disable-driver-mounts (0.39s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-769000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p disable-driver-mounts-769000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.39s)
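The disable-driver-mounts group is gated on the VirtualBox driver, so on this hyperkit run it is skipped and only its leftover profile is deleted. A hedged sketch of re-running just this group against VirtualBox, assuming minikube's usual test/integration layout and that the harness accepts the standard go test -run filter (the driver flag name is an assumption, not taken from this log):

	# hypothetical invocation; -minikube-start-args is assumed to be the harness flag for driver selection
	go test ./test/integration -run 'TestStartStop/group/disable-driver-mounts' -v -timeout 30m \
	  -args -minikube-start-args="--driver=virtualbox"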

                                                
                                    